ELIPSIS

Interactive Phoneme Synthesizer: Exploring the world of sounds produced by the human vocal apparatus

 

What is it?

ELIPSIS is an open-source toolkit for language education, the study of phonetics, and sound experimentation. It is also a unique resource for visualizing information in new ways. Our screen-based system maps out the diverse ways the human vocal apparatus produces its rich inventory of sounds, and it is designed to explore how the brain processes and derives meaning from those sounds.

It consists of a web-based (HTML5) interface, a free standalone application for smartphones, touch screens, and tablets, and an online resource for the development, archival, and dissemination of the project. The project was created by F. Mathias Lorenz and is under ongoing development with the technical expertise of Jason Soares.


What does it do?

ELIPSIS allows users to explore the rich inventory of human sounds used by the world's languages. Our system uses a set of universal symbols called the IPA (International Phonetic Alphabet). The standard way to view the IPA is with a set of charts that arrange the sounds into an inventory of symbols. These individual sounds, called phonemes, are the basic "building blocks" of any spoken language; strung together, they form words. Think of the IPA as a periodic table of elements, but instead of mapping out chemical elements, it maps out the whole range of human sounds, like vowels and consonants, as well as rare sounds like pops, clicks, and whistles.

ELIPSIS takes this IPA chart and arranges the symbols into a unique 3-dimensional configuration called a semantic space. A semantic space is a way to categorize information by physically organizing items according to their conceptual relationships with one another: things that are similar occupy the same region, and things that are dissimilar sit further apart. This allows the data to be organized as a coherent whole, much as the human brain categorizes information about the world around it.
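As a rough illustration of that proximity principle, here is a minimal sketch (in TypeScript, not the actual ELIPSIS code) that describes each phoneme with a small articulatory feature vector and uses those features directly as 3-D coordinates, so that similar sounds land near one another. The feature axes, scales, and example values are illustrative assumptions.

    // Illustrative feature axes: place of articulation (front to back),
    // manner (degree of constriction), and voicing. Values normalized to 0-1.
    type FeatureVector = [place: number, manner: number, voicing: number];

    interface PlacedPhoneme {
      symbol: string;                       // IPA symbol, e.g. "p" or "b"
      position: [number, number, number];   // coordinates in the semantic space
    }

    // Use the normalized features directly as 3-D coordinates, so sounds with
    // similar place, manner, and voicing occupy the same region.
    function place(symbol: string, features: FeatureVector): PlacedPhoneme {
      return { symbol, position: [...features] };
    }

    function distance(a: PlacedPhoneme, b: PlacedPhoneme): number {
      return Math.hypot(
        a.position[0] - b.position[0],
        a.position[1] - b.position[1],
        a.position[2] - b.position[2],
      );
    }

    // /p/ and /b/ differ only in voicing, so they sit close together;
    // /p/ and the open vowel /a/ differ on every axis, so they sit far apart.
    const p = place("p", [0.0, 0.0, 0]);
    const b = place("b", [0.0, 0.0, 1]);
    const a = place("a", [0.9, 1.0, 1]);
    console.log(distance(p, b) < distance(p, a)); // true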

In ELIPSIS, as in the IPA chart, the phoneme symbols are mapped out according to their specific sounds and places of articulation. However, instead of several 2-dimensional black-and-white charts that separate vowels, consonants, and rare "pops and clicks", we combine them, with the added layer of color, into a single spherical interface. The result is a unique configuration of sounds, colors, tones, and symbols that lets users interact with phonetics in new ways. Users can access the specific data associated with each phoneme to learn what makes it unique, such as its linguistic description, articulation diagrams, frequency of usage, and the energetic values used by phoneticians (such as formants and frequencies).
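To make the per-phoneme data concrete, here is a minimal sketch of the kind of record a selected symbol might expose. The field names and the example values for the vowel /i/ are illustrative assumptions, not data taken from ELIPSIS.

    // Hypothetical shape of the data attached to each phoneme in the interface.
    interface PhonemeEntry {
      symbol: string;              // IPA symbol
      description: string;         // linguistic description
      color: string;               // color of the symbol's region on the sphere
      articulationDiagram: string; // path to the vocal-apparatus diagram
      usageFrequency: number;      // relative frequency of use across languages (0-1)
      formantsHz: number[];        // typical formant values F1, F2, ... in Hz
    }

    // Example entry (values are rough textbook figures, for illustration only).
    const closeFrontVowel: PhonemeEntry = {
      symbol: "i",
      description: "close front unrounded vowel",
      color: "#ffd700",
      articulationDiagram: "diagrams/i.svg",
      usageFrequency: 0.92,
      formantsHz: [280, 2250],
    };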


How does it work?

"Point, Click and Learn" mode allows users to listen to spoken samples of phonemes by clicking on individual IPA symbols on the interactive chart. This mode can be used by language teachers, students, and curious individuals to explore the world of sounds and how they are made. When a symbol is triggered, users can also see information about where and how these sounds are produced in the vocal apparatus. Alongside the ELIPSIS diagram, there is a profile of a human head with a dynamic configuration of the vocal apparatus (jaw, palate, tongue, teeth, lips and nasal cavity.) Also visible is a range of specific data such as linguistic description, frequency, and formant values.

Because each phoneme occupies a region of the ELIPSIS chart characterized by its own specific color, shape, and sound, the interface lends itself to better learning retention. It can also be used in a wide range of phonetic studies, such as listening comprehension, accent management, mastering dialects, and voice coaching. The interface can also be helpful in identifying synesthesia in young individuals, or in helping to overcome articulation problems such as speech impediments.

"Language Toggle/Overlay" mode enables users to select specific language subsets of phonemes to be highlighted, such as the ones particular to given languages like English, Japanese or Russian. These unique combinations can be visualized as "phoneme fingerprints" of a given language. This set selection mode can be toggled on or off to allow users to see the phonetic configurations of multiple languages simultaneously. This way, the user can easily visualize which sound sets are shared or unshared between multiple languages, and then focus on those sounds that might be more difficult for a non-native speaker to articulate. The user can then use the additional information in the interface to help pronounce the new sounds by hearing phoneme samples while seeing how the sounds are produced.

Users can also configure their own versions based on target languages, regions of focus, or areas of difficulty they wish to overcome. ELIPSIS can also be used recreationally by anyone interested in experimenting with vocalized sound synthesis.


Where is it going?

There are a number of exciting tasks that ELIPSIS is being designed to perform. Some of these are closer to being developed than others, given the complexities of programming and the state of interface technologies. Here is where we are going with it...

The first of these tasks is related to the current configuration. Assuming that someone is able to click on

ELIPSIS is also going to serve as a database to archive the amazing range of sounds that can be articulated by the human vocal apparatus. Imagine a platform for exploring the world's incredible range of sounds: non-pulmonic "pops and clicks", throat singing, beatboxing, tonal harmonization, and more.

It also has relevance in the fields of accent management, voice coaching, and overcoming problems in articulation and language comprehension.


Where is it from?

The Energy Language Project started out in 1996 as a philosophical experiment to explore the nature of consciousness and its relationship to human language. Since then, it has grown in scope and magnitude to potentially involve the fields of phonetics, special education, language archiving, and information visualization.

Since the project became my graduate thesis at NYU in 2003, a number of additional applications have been created to help familiarize people with the system and to develop an open-source platform for the exchange of knowledge in language education.


  • ELIPSIS stands for Energy Language Interactive Phoneme Synthesizing Interface System.
  • Energy because it's based on the tones, vibrations and frequencies of human communication.
  • Language because its goal is to better understand, explore and archive the world's languages.
  • Interactive because it allows users to respond to and manipulate a set of flexible tools in real time.
  • Phoneme because these are the fundamental building blocks of any spoken language system.
  • Synthesizing because it enables the breaking down and re-production of sounds, colors and tones.
  • Interface because the system affords users the opportunity to interact with a single control surface.
  • System because it integrates a series of disparate tools into a coherent, unified arrangement.

For more information on the origins and history of the project, please visit energylanguage.org