2008-11-11

Time as connective tissue

An expansion of applied synchronicity. The principle can be applied more widely, premised on logging a timestamp for each of the following events (a sketch of such a logger follows the list):

- audio file playing
- web page focused
- image(s) focused in viewer
- phone # active on phone
- email message currently focused
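
A minimal sketch of such a logger, assuming each application (player,
browser, viewer, mail client) can call a hook when a resource becomes
active; the log path and field names here are invented::

    import json
    import time

    LOG_PATH = "activity.log"   # hypothetical location for the event log

    def log_event(kind, resource):
        """Append a timestamped record of the newly active resource.

        kind     -- 'audio', 'web', 'image', 'phone', or 'email'
        resource -- filename, URL, phone number, or message id
        """
        entry = {"ts": time.time(), "kind": kind, "resource": resource}
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps(entry) + "\n")

    # e.g. called from hooks in the audio player and browser
    log_event("audio", "bob_and_ray_045.mp3")
    log_event("web", "http://example.com/bob-and-ray")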

Voice recognition input provides metadata for one of the above resources, in a format like::

    audio put artist bob and ray id hard as nails tag interview tag staply

meaning (see the parser sketch after this list)

- audio = determine currently active audio file
- put = create a new chunk of data about the current audio file
- artist = active field of chunk is 'artist'
- bob and ray = key:artist, value:bob and ray
- id = create a new indexed entry key:id, value:'hard as nails' pointing to this record
- tag = append interview and staply to the list of tags on this audio file
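
A sketch of a parser for this controlled vocabulary; the keyword set is
an assumption (a real version would draw it from a per-resource schema)::

    KEYWORDS = {"artist", "id", "tag", "category"}  # assumed field names

    def parse_command(words):
        """Split a command into (resource, verb, fields)."""
        tokens = words.split()
        resource, verb = tokens[0], tokens[1]
        fields = {}
        key, value = None, []

        def flush():
            if key is None:
                return
            if key == "tag":              # repeated tags accumulate
                fields.setdefault("tag", []).append(" ".join(value))
            else:
                fields[key] = " ".join(value)

        for tok in tokens[2:]:
            if tok in KEYWORDS:
                flush()                   # close the previous field
                key, value = tok, []
            else:
                value.append(tok)         # multi-word values
        flush()
        return resource, verb, fields

    print(parse_command(
        "audio put artist bob and ray id hard as nails tag interview tag staply"))
    # ('audio', 'put', {'artist': 'bob and ray',
    #                   'id': 'hard as nails',
    #                   'tag': ['interview', 'staply']})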

or::

    image put category sunrise tag frost

- image = determine currently focused image(s)
- put = create new data for these image(s)
- category = active field of image record(s) is category
- sunrise = key:category, value:sunrise
- tag = append frost to the list of tags on these image(s)
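
The same parser handles this command unchanged::

    print(parse_command("image put category sunrise tag frost"))
    # ('image', 'put', {'category': 'sunrise', 'tag': ['frost']})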

The magic is in *currently*. We know the timestamp of the voice command, we've stamped its begin time, and the player knows the time offset into the file at any point.

We parse the logs to determine the correct association for each voice command; the logs tell us which audio file, web page, images ... we are referring to.
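
Resolving *currently* could then be a simple scan: find the most recent
log entry of the right kind at or before the voice command's timestamp.
A sketch, assuming the JSON-lines log from the earlier sketch::

    import json

    def active_resource(log_path, kind, voice_ts):
        """Return the resource of the given kind active at voice_ts."""
        best = None
        with open(log_path) as log:
            for line in log:
                entry = json.loads(line)
                if entry["kind"] == kind and entry["ts"] <= voice_ts:
                    if best is None or entry["ts"] > best["ts"]:
                        best = entry
        return best["resource"] if best else None

    # a voice command stamped at this time, parsed as resource 'audio',
    # attaches its fields to whatever file was logged last before it
    print(active_resource("activity.log", "audio", 1226412345.0))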

I think there is real potential here: the voice recognition demands are not too great (a controlled voice and vocabulary), and the logging and subsequent matching up seem fairly doable.

Applied synchronicity

I'm relishing the brilliance of Bob and Ray, listening to a collection of mp3s: 5 days and 22 hours of them.

They should be indexed, but I don't have 6 days to devote to the project.

How could I multitask, do the indexing efficiently in the background?

It seems it could be done via time-based data matching. The audio player logs a timestamp and filename as it plays through the set of files. I record voice messages describing which skit is being played; these voice files are timestamped as well. An application then matches the words in my voice recordings to the corresponding point in the file being played.
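
A sketch of the matching step, assuming the player logs (start timestamp,
filename) pairs in playback order and each voice note carries its own
start timestamp (the filenames are invented)::

    def index_position(play_log, voice_ts):
        """Return (filename, offset_seconds) for the file playing at voice_ts."""
        current = None
        for start_ts, filename in play_log:
            if start_ts <= voice_ts:
                current = (start_ts, filename)   # latest file started so far
            else:
                break
        if current is None:
            return None
        start_ts, filename = current
        return filename, voice_ts - start_ts

    play_log = [(1000.0, "br_disc1_01.mp3"), (2380.0, "br_disc1_02.mp3")]
    print(index_position(play_log, 2500.0))   # ('br_disc1_02.mp3', 120.0)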

The ideal would be wearing a Bluetooth mic, speaking into it to record an index into the currently playing file.

A more accessible start towards that goal would be to provide the index info via keyboard instead of voice.
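
That version is nearly trivial: a sketch that stamps each typed note the
moment it is submitted, so the same matching applies (the log filename is
made up)::

    import time

    with open("index_notes.log", "a") as notes:
        while True:
            try:
                text = input("note> ")    # describe the skit now playing
            except EOFError:
                break
            notes.write("%f\t%s\n" % (time.time(), text))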