An expansion of applied synchronicity. The principle can be applied more widely, premised on logging timestamps for each of the following (a log sketch follows the list):
- audio file playing
- web page focused
- image(s) focused in viewer
- phone # active on phone
- email message currently focused
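Here is a minimal sketch of what that log could look like, in Python; the JSON-lines format, the append_event helper, and the activity.log file name are illustrative assumptions, not a settled design::

import json
import time

LOG_PATH = "activity.log"   # hypothetical log location

def append_event(resource_type, resource_id):
    """Record that a resource became active or focused at this moment."""
    entry = {
        "ts": time.time(),       # seconds since the epoch
        "type": resource_type,   # "audio", "web", "image", "phone", "email"
        "id": resource_id,       # file path, URL, phone number, message id, ...
    }
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(entry) + "\n")

# Events the applications would emit on focus or playback changes, e.g.:
# append_event("audio", "/music/bob_and_ray/hard_as_nails.mp3")
# append_event("web", "http://example.org/some/page")
# append_event("image", "/photos/2008/img_0042.jpg")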
Voice recognition input provides metadata for one of the above resources, in a format like::
audio put artist bob and ray id hard as nails tag interview tag staply
meaning
- audio = determine currently active audio file
- put = create a new chunk of data about the current audio file
- artist = active field of chunk is 'artist'
- bob and ray = key:artist, value:bob and ray
- id = create a new indexed entry key:id, value:'hard as nails' pointing to this record
- tag = append interview and staply to the list of tags on this audio file
or::
image put category sunrise tag frost
- image = determine currently focused image(s)
- put = create new data for these image(s)
- category = active field of image record(s) is category
- sunrise = key:category, value:sunrise
- tag = append frost to the list of tags on these image(s)
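As a sketch of how the transcript might be parsed into a metadata record (Python again; the keyword set and record shape are inferred from the two examples above, not a fixed grammar)::

FIELD_KEYWORDS = {"artist", "id", "category", "tag"}

def parse_command(text):
    """Turn a transcript like the examples above into a metadata record."""
    tokens = text.split()
    resource_type = tokens[0]                  # "audio", "image", ...
    if tokens[1] != "put":
        raise ValueError("only the 'put' verb is sketched here")
    record = {"type": resource_type, "tags": []}
    field, value = None, []

    def flush():
        if field is None:
            return
        joined = " ".join(value)
        if field == "tag":
            record["tags"].append(joined)      # tag is repeatable, collects a list
        else:
            record[field] = joined             # artist, id, category hold one value

    for tok in tokens[2:]:
        if tok in FIELD_KEYWORDS:
            flush()                            # close out the previous field
            field, value = tok, []
        else:
            value.append(tok)                  # value words run until the next keyword
    flush()
    return record

print(parse_command("audio put artist bob and ray id hard as nails tag interview tag staply"))
# {'type': 'audio', 'tags': ['interview', 'staply'],
#  'artist': 'bob and ray', 'id': 'hard as nails'}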
The magic is in *currently*. We know the timestamp of the voice command: we've stamped its begin time, and the audio file knows its time offset at any point.
We parse the logs to determine the correct association for each voice command; the logs tell us which audio file, web page, or images we are referring to.
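A sketch of that lookup, assuming the JSON-lines log format above: take the start timestamp of the spoken command and find the most recent log entry of the requested resource type at or before that moment::

import json

def active_resource(log_path, resource_type, voice_ts):
    """Return the id of the resource of this type active when the voice command began."""
    best = None
    with open(log_path) as log:
        for line in log:
            entry = json.loads(line)
            if entry["type"] == resource_type and entry["ts"] <= voice_ts:
                if best is None or entry["ts"] > best["ts"]:
                    best = entry
    return best["id"] if best else None

# Stitching the sketches together (names are the assumed ones from above):
# record = parse_command(transcript)
# record["resource"] = active_resource("activity.log", record["type"], voice_start_ts)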
I think there is real potential here: the voice recognition demands are not too great (controlled voice, limited vocabulary), and the logging and subsequent matching up seem fairly doable.
2008-11-11