2008-11-11

Time as connective tissue

An expansion of applied synchronicity. The principle can be applied more widely, premised on logging timestamps for the following (a sketch of the logging follows the list):

- audio file playing
- web page focused
- image(s) focused in viewer
- phone # active on phone
- email message currently focused
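
A minimal sketch in Python of the logging side; the log file name, the tab-separated line format, and the idea that each application hooks its own focus changes are my assumptions, not a worked-out design::

    import time

    LOG_PATH = "activity.log"  # hypothetical location

    def log_focus(resource_type, resource_id):
        """Append one timestamped line per focus change."""
        with open(LOG_PATH, "a") as log:
            log.write("%f\t%s\t%s\n" % (time.time(), resource_type, resource_id))

    # e.g. hooks in the player / browser / viewer / mail client would call:
    # log_focus("audio", "/music/bob_and_ray/hard_as_nails.mp3")
    # log_focus("web", "http://example.com/some/page")
    # log_focus("image", "/photos/2008/sunrise_0042.jpg")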

Voice-recognition input provides metadata for one of the above resources, in a format like::

    audio put artist bob and ray id hard as nails tag interview tag staply

meaning

- audio = determine currently active audio file
- put = create a new chunk of data about the current audio file
- artist = active field of chunk is 'artist'
- bob and ray = key:artist, value:bob and ray
- id = create a new indexed entry key:id, value:'hard as nails' pointing to this record
- tag = append interview and staply to the list of tags on this audio file

or::

    image put category sunrise tag frost

- image = determine currently focused image(s)
- put = create new data for these image(s)
- category = active field of image record(s) is category
- sunrise = key:category, value:sunrise
- tag = append frost to the list of tags on this image (or these images)
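
A minimal parsing sketch for this command format, assuming a controlled vocabulary: the resource word comes first, then ``put``, and the known field names plus ``id`` and ``tag`` delimit the multi-word values between them (so keyword words can't appear inside values). The field names listed are only illustrative::

    KEYWORDS = {"put", "id", "tag", "artist", "category", "album", "title"}
    RESOURCES = {"audio", "image", "web", "phone", "email"}

    def parse_command(transcript):
        """Split a transcript into resource, fields, index entries, and tags."""
        words = transcript.split()
        resource, words = words[0], words[1:]
        assert resource in RESOURCES and words[0] == "put"
        record = {"resource": resource, "fields": {}, "ids": [], "tags": []}
        key, value = None, []
        for word in words[1:] + ["put"]:   # trailing keyword flushes the last value
            if word in KEYWORDS:
                if key == "id":
                    record["ids"].append(" ".join(value))
                elif key == "tag":
                    record["tags"].append(" ".join(value))
                elif key is not None:
                    record["fields"][key] = " ".join(value)
                key, value = word, []
            else:
                value.append(word)
        return record

    # parse_command("audio put artist bob and ray id hard as nails tag interview tag staply")
    # -> {'resource': 'audio', 'fields': {'artist': 'bob and ray'},
    #     'ids': ['hard as nails'], 'tags': ['interview', 'staply']}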

The magic is in *currently*. We know the timestamp of the voice command: we stamped its begin time, and the recording knows the time offset of any point within it.

We parse the logs to determine the correct association for each voice command; the logs tell us which audio file, web page, image ... we are referring to.
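
A sketch of the match itself, assuming the tab-separated log sketched above: scan for the most recent entry of the requested resource type at or before the voice command's begin time::

    def resource_at(log_path, resource_type, voice_begin):
        """Return the id of the resource of this type active at voice_begin."""
        active = None
        with open(log_path) as log:
            for line in log:
                stamp, rtype, rid = line.rstrip("\n").split("\t", 2)
                if float(stamp) > voice_begin:
                    break            # the log is in time order; we've gone past
                if rtype == resource_type:
                    active = rid     # the latest matching entry so far wins
        return active

    # e.g. resource_at("activity.log", "audio", voice_begin)
    # answers which audio file "currently" refers to in a command.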

I think there is real potential here: the voice-recognition demands are not too great (a controlled voice and vocabulary), and the logging and subsequent matching up seem fairly doable.