
Input Interface


2:50 am
April 26, 2010



New Member

posts 1

I am a little bit lost at the moment. I have modified an existing sensor, and it now gives me 3D coordinates of objects in its range. The data format is quite simple: (x, y, z). I guess this would be kind of ideal for creating a 3D multi-touch setup. The only thing I am missing is a server that can interpret the data.

Is there any chance to make this work within the Bespoke Multi-Touch Framework? As far as I can see from the online descriptions, the only way to get "data" in there is a camera feed or images, right?


Looking forward to any kind of response.

6:22 pm
April 29, 2010



posts 49

Hi tabasco,

It might be possible to make this work with the Bespoke Multi-Touch Framework. You're correct that the data used by the framework is video data, typically captured by an infrared camera. Moreover, this data is 2D, not 3D. So I would speculate that you'd have to integrate data capture from your 3D sensor and then roll your own 3D "data" processing system. I say "data" processing system because, as you mentioned, the data coming from your device is an (x, y, z) tuple, not necessarily an image.

My system detects points of contact within a 2D image. But if you already have interaction "points", then you may not need to detect such points yourself. Instead, you'd more likely have to massage that data to determine what indeed was an intended interaction point and what was chaff, and then decide what that interaction "meant".
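To illustrate the kind of "massaging" I mean (this is purely a sketch of my own, not part of the framework; the function, thresholds, and the assumption that z measures distance from an interaction surface are all hypothetical), you might filter raw (x, y, z) samples against a distance threshold to discard chaff, then merge nearby survivors into candidate interaction points:

```python
import math

def extract_interaction_points(samples, max_z=0.05, merge_radius=0.02):
    """Filter raw (x, y, z) sensor samples and merge nearby survivors.

    Hypothetical sketch: treat z as distance from an interaction
    surface, discard samples farther away than max_z (chaff), and
    greedily merge the rest into clusters whose 2D centroids become
    the candidate interaction points.
    """
    near = [p for p in samples if abs(p[2]) <= max_z]
    clusters = []  # each cluster is a list of nearby samples
    for p in near:
        for cluster in clusters:
            cx = sum(q[0] for q in cluster) / len(cluster)
            cy = sum(q[1] for q in cluster) / len(cluster)
            if math.hypot(p[0] - cx, p[1] - cy) <= merge_radius:
                cluster.append(p)
                break
        else:
            clusters.append([p])
    # Return the 2D centroid of each cluster
    return [(sum(q[0] for q in c) / len(c),
             sum(q[1] for q in c) / len(c)) for c in clusters]
```

Deciding what an interaction "means" (tap, drag, hover) would then be a second pass over these points across frames.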

An alternative (it might be a bit off-the-wall) would be to convert the 3D point data into a 2D image for integration with the existing image processing pipeline. That pipeline is described in detail in a paper I wrote, which can be found at …..110523.pdf. But this seems a little funky to me, since the idea of the image processing is to "decode" interaction data, and I'm suggesting that you'd be "encoding" data from your sensor just to turn around and decode it. I'd bypass that middleman and instead consider how to interpret your 3D point data directly. For instance, do you want bona fide 3D interaction? Or are you building the notion of a 2D interaction plane within 3D space, so that an interaction is only triggered on or near that plane? How is the point data generated, e.g. by your fingers, or by a gloved hand with active or passive light emission/reflectance? And do you have to trigger the interaction, e.g. with a button press, or are "fingers" detected continuously, leaving you to decide what constitutes an interaction?
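For what it's worth, the "encode to an image" alternative could be sketched roughly like this (again, illustrative only; the function name, resolution, and the assumption that x and y are normalized to [0, 1] are my own, not anything the framework provides). You'd drop z and stamp a bright blob at each point's pixel location, mimicking the fingertip blobs an infrared camera would produce:

```python
def rasterize_points(points, width=320, height=240, radius=3):
    """Render (x, y, z) points into a grayscale image (rows of 0-255).

    Hypothetical sketch: z is discarded, x and y are assumed
    normalized to [0, 1], and each point becomes a filled square
    "blob" of the given pixel radius so that a blob-detection
    pipeline has something to find.
    """
    image = [[0] * width for _ in range(height)]
    for x, y, _z in points:
        px = min(width - 1, max(0, int(x * (width - 1))))
        py = min(height - 1, max(0, int(y * (height - 1))))
        for row in range(max(0, py - radius), min(height, py + radius + 1)):
            for col in range(max(0, px - radius), min(width, px + radius + 1)):
                image[row][col] = 255
    return image
```

As I said, though, encoding just to decode again feels like a detour; interpreting the points directly seems cleaner.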

In 2D multi-touch, an interaction is generally detected when you touch a surface. The same doesn't have to apply in 3D. By the way, there are literally dozens of academic papers on the topic of 3D user interaction; indeed, there's an entire conference dedicated to it (3DUI, …../3dui2010/). Hope this gives you some ideas.






About the Bespoke Software forum
