[GRASS-dev] Fwd: 3D Human Interface & Control Systems

Wilfred L. Guerin wilfredguerin at gmail.com
Mon May 21 22:34:08 EDT 2007


Attached: a file to read, from an academic K-12 curriculum

---------- Forwarded message ----------

It would be good to review the simple technical specs for both 3D
interface systems and extrapolative modeling from ballies.org (now
down?), either as the txt ATTACHED or mirrored at http://remedials.org/#11
"Script for Overactive Professors", which describes an easy method of
using a normal ball and webcam for 3D interface and "pelotatics" --
camera vector and 3D recomposition using pelotas (balls). This is
extremely easy to implement and should be a standard interface for
GRASS etc., and preferably for all GIS/engineering/architecture systems in general.

Please feel free to contact me about the implementations we have
created based on this: WilfredGuerin at gmail.com

the txt is not attached due to list restrictions...
-------------- next part --------------
Ballies.Org Script for Overactive Professors.

This script/guide is intended to standardize student introduction to Ballies.Org and to provide advisories for enhancing the presentation consistently across languages and contexts.

Lecture content is plain text, script events are notated in [brackets], and recommendations on affect and influence appear in (parentheses).

(It is likely beneficial to go down to your local BurgerMartPlace and borrow a large number of their PlayPlace plastic spherical balls. Hopefully by the date of your presentation, the global venues will have both a standard "lease" option on the catering menu and the option to order a certain number of standard units through their standard purchasing program. They might even come packaged with a meal or cookies. SuperBalls, bouncieballs, etc. are commonly available from coin-vendie distributors at a maximal per-unit cost significantly less than in the retail machines. Gum (candy) balls may not be regular in shape; any selected object MUST be of solid contiguous color. Inflatable balls must be rigid hulled; deformation due to gravity is bad. Balloons tend not to be spherical -- leave them for student suggestion later. Foam balls (or modified toy strut pieces) are best reserved for rigid-body structure discussions, distributed then or in the handout package. Most fabric, hobby, and household decoration stores have overpriced stock of various sizes of foam balls. Foam melts if you paint it with the wrong type of paint and also generates hazardous fumes, so select accordingly. Holiday lights of spherical design, or pingpong balls with standard bulbs inside, are all options for illuminated units.)

=====Ballies.Org=Presentation=Guidelines=====
 [BEGIN SCRIPT]

Well, the basic concept of Ballies.Org is quite simple; hopefully this introduction will help guide you in effectively playing with your Ballies.Org.

(welcome statements, on-behalf-of, sponsor acks, you should be recording video.)

A Ballie is relatively simple to comprehend.

A Ballie is normally round, uniformly spherical, of a contiguous color and material, and of measurable size.

A Ballie appears large to you when it is on your nose, and appears smaller when far away.

Ballies are easy to see, usually visually distinct, brightly colored, and they could even light up or glow.

Ballies are rugged, care-free and infallible, and retain a spherical shape under all conditions, regardless of where they may go.

 [TRIGGER BALL DUMP]
(A long, disguised box with an open top or actuated lid sits behind the speaker and above the walkway and stairs at the rear of the audience when large quantities of Ballies are available; a podium presentation uses a shoe-box-size container dumped onto the stage instead. A helper, rope, or other actuator causes a simultaneous dump of Ballies from all containment vessels. Safety issues should be neutralized immediately by crowd participation: allocate one ball of each color to each individual when available, or one ball each, which otherwise instigates both participation and social interaction. Ensuring each individual gets one of each facilitates natural selection of social roles in the community. In formal environments with a bag per seat, provide only 3 balls when 4 colors are available; this forces a breakdown of isolationistic tendencies without disrupting morale, until each individual ends up with a result set of all colors.)

(SuperBalls, bouncieballs, and related are appropriate for audiences who don't mind being bonked in the head upon dump and in situations where potential safety naivete is of less concern. Gum balls have potential risk of distraction, inclusion in the handout bag is more rational. Inflatable balls must be rigid hulled (gravity deformation minimal).)

(Foam balls are best distributed in the handout or after rigid-body structures are discussed. Foam balls DEFORM when impacted; be leery of dropping or squashing them, and holes poked in them add up to mutilation.)

Ballies /come/ in many sizes and colors.

It is best for everyone to have an equal selection of Ballies now.

 [FACILITATE BALL DISTRIBUTION, SOCIAL INTERACTION]
(social anxieties remedied, equal allocation of balls)

As we see, the Ballies are quite uniform, of solid color, and easy to handle.

===In=Pictures===

(Especially effective when you have live roaming-camera video displayed behind the speaker: allow crowd panning, zoom in on ball-playing activity, handle balls in front of the camera, etc. This familiarizes everyone with the appearance of balls in the video/display.)

As you see, an equal sized ball appears large when it is close to you, and smaller when it is far away.

 [wave ball] (camera team imitates ball motion relative lens)

Now how will we find our Ballies wherever they may go?

We see lots of Ballies, but how do we know where they are?

(If a beach-ball or other large standard ball is available, present it on stage; ideally the beach-ball "error ball" will be visually distinct but also appear the same perceived size as a normal ball held comfortably in hand mid-audience.)

 [PRESENT ERROR-BALL OR GIANT STD BALL ]

Is /this/ "ball" in the same location as your Ballies?

But it LOOKS the same size!

 [Present tiny ball or bead]
(video zoom on tiny ball held by speaker, match size of giant ball, normal ball, tiny ball in video same frame)

Obviously, identification of each specific ball would help, especially if you know the size or identity of it.

Knowing the lens and zoom characteristics of the camera will also help.

 [PRESENT MEASURING DEVICE]
(visibly measuring tiny bead with giant measuring stick helps. clearly marked meter sticks are useful here. calipers/etc)

It appears this (small ball) is XXX microns, same as what it was in the electron microscope... (presenter customized)

Which is about NN millimeters.

Let's find out our camera characteristics.

 [] [] [] [] []
At 1 meter distance, the camera's frame is xxx millimeters wide, yyy millimeters tall, and has a field of view of aaa degrees, or bbb, ccc, ddd degrees from the center to the edge (x), edge (y), and corner [] [] [] [] []
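(For presenters who want the arithmetic spelled out, here is a minimal Python sketch of deriving the field-of-view angles from the measured frame size at a known distance. The function name and example numbers are illustrative, not part of the script.)

import math

def fov_from_frame(width_mm, height_mm, distance_mm=1000.0):
    """Return (horizontal, vertical, diagonal) full FOV angles in degrees,
    given the measured frame width/height at a known distance (ideal pinhole camera assumed)."""
    half_h = math.atan((width_mm / 2.0) / distance_mm)
    half_v = math.atan((height_mm / 2.0) / distance_mm)
    half_d = math.atan((math.hypot(width_mm, height_mm) / 2.0) / distance_mm)
    return tuple(math.degrees(2.0 * a) for a in (half_h, half_v, half_d))

# Example: a frame 560 mm wide and 420 mm tall at 1 m is roughly a 31 x 24 degree view.
print(fov_from_frame(560, 420))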

(if a flatbed scanner is connected to video device view)

Let's get a good measurement of the balls. Small ones can go on the flat-bed scanner; using the native resolution and known pixel size, we should be able to measure them.

We can do the same with the cameras and larger balls, place them with a measuring stick and derive the size using the known camera FOV and image measurements.

And of course, here is a caliper, meter stick and two T-squares (or a piece of rectangular paper with "T" label), and a flexible measuring tape if you know how to divide by PI.

 [Present tools to volunteers]
(keep cameras on workspaces)

So now, we know how big our Ballies are...

 [Present sizes in mm to board/spreadsheet/etc]

We'll label the Ballies somehow, and keep track of our manual measurements.

But wouldn't it be easier just to have the machines measure the Ballies for us?

Let's look at the scan of the small balls.

 [Image presented]
(with multiple video, big ball with normal balls on stage, crowd view with visible balls, scan on main screen, measurements table visible)

In this image, all of our Ballies are perfect circles!

Obviously, if we measure the X and Y major axes (and they are the same, no less), and we know the resolution of the scanner and the physical size per pixel, we can compute the size of the ball.

 [Manually find xmin, xmax, etc using pixel(x,y)]

Wow, we can even find the center of the circle this way.
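(A minimal Python sketch of this scanner measurement, assuming you already have a binary mask of one ball from the flatbed scan and know the scanner's native DPI; the x/y extremes give the diameter and the midpoints give the circle center. The function name is illustrative.)

import numpy as np

def measure_scanned_ball(mask, dpi):
    """mask: 2D boolean array, True where the ball's color is.
    Returns (diameter_mm, (center_x, center_y)) with the center in pixel coordinates."""
    ys, xs = np.nonzero(mask)
    xmin, xmax = xs.min(), xs.max()
    ymin, ymax = ys.min(), ys.max()
    dx, dy = xmax - xmin + 1, ymax - ymin + 1      # extents in pixels; should agree for a circle
    mm_per_px = 25.4 / dpi                         # physical size of one scanner pixel
    diameter_mm = 0.5 * (dx + dy) * mm_per_px      # average the two axes
    center = ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)
    return diameter_mm, center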

 [DO BASIC MATH TUTORIAL]

Shouldn't the 45° angles also be the same radius from the center point?

Yep, it appears to be a circle!

So, now we know the exact size, even location of centerpoint in this image, of all our small balls!

 [COMPLETE MEASUREMENTS]
(if students have workstations, post data to server)

But, wouldn't it be easier to make the machine do this?!?!

What about our accuracy too? Almost all of the images and Ballies have fuzzy edges!

Can't we find the actual edge of the Ballies and have better accuracy?

How much impact would half a pixel error have on our task of finding Ballies?

 [DISPLAY PIXEL ERROR IN BALL EDGES]

(Depending on intensity of group, discuss sub-pixel edge determination, circle edge tracing and normalized center point, etc.)

Let's automate this process.

Step 1: Find X and Y extremes of a circle in an image.

Step 2: Given that a perfect sphere always appears as a circle in an image, we will optimize our circle-finding technique as best possible.

Step 3: Add some extra methods, like optimized sub-pixel boundary finding, faster searching, and effective confirmation of circularity by probing edge locations.

Step 4: Make this work with real images!



So, let's start simple.

We know the ball is around point X,Y in this image.

Finding the leftmost edge means tracing left while the color is contiguous.

If the lighting is normal, shading is a problem, so we can use the proportion of each color channel to the next to see if the color is the same even when the amplitude or lux intensity is not. Test R:G:B of pixel (a) against R:G:B of pixel (b) -- basically, see if one pixel's r:g ratio is the same as the next. You can use BW or grayscale images too, but you might end up a drone in the army that way.
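(A small Python sketch of the channel-ratio test just described: two pixels are treated as the "same color" if their R:G and G:B proportions match within a tolerance, which tolerates shading on a solid-colored ball. The tolerance value is an assumption to tune per camera.)

def same_color_ratio(p1, p2, tol=0.15):
    """p1, p2: (r, g, b) tuples. Compare channel proportions rather than raw values."""
    def ratios(p):
        r, g, b = (c + 1.0 for c in p)     # +1 avoids division by zero on dark pixels
        return (r / g, g / b)
    return all(abs(a - b) / max(a, b) < tol
               for a, b in zip(ratios(p1), ratios(p2)))

# A lit and a shaded sample of the same red ball still match:
print(same_color_ratio((200, 60, 50), (120, 36, 30)))    # True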

If we didn't start exactly at the middle of the circle, we know the midpoint of our line will be the intersect of the other axis.

Trace left, trace right, and find the locations of the edge of our color; then trace up and down from the middle; then go back to the horizontal axis and use the midpoint of the vertical extremes to find the true horizontal extremes.

We can always search for the sub-pixel exact location of the edge on each extreme, and even skip most of the tracing if we know approximately where the edge should be and can confirm such.

Given we see the entire circle, we don't have to worry about obfuscated sections.

To make sure it is really a circle, we should trace out the four 45° radials, given that each is a simple (X+1,Y+1) line, and even skip some of the trace if it's likely to be the same circle.

With these 8 high accuracy sub-pixel coordinates, we can easily find the closest centerpoint of the circle and an accurate radius or diameter.
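(One way to turn the 8 traced sub-pixel edge points into a center and radius is a standard least-squares circle fit, sketched below in Python; the script above only asks for the "closest centerpoint", so the exact fitting method is an assumption.)

import numpy as np

def fit_circle(points):
    """points: list of (x, y) sub-pixel edge locations. Returns (cx, cy, radius)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 = 2*cx*x + 2*cy*y + c in the least-squares sense.
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = float(np.sqrt(c + cx ** 2 + cy ** 2))
    return float(cx), float(cy), radius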

 [RECAP]

Now we know where in an image a circle is.

But we want to know where our Ballies are!

Assuming we measured our Ballies correctly in the first place, and using the known characteristics of the camera lens, shouldn't we be able to find the center of our Ballies in 3D?

If the camera has a regular lens, then we know the vector of any pixel from the camera.

Similarly, we know the vector from the camera to the sub-pixel value where the center of the circle is.

If your camera sucks, and its lens is not regular, see the methods of correcting skew and lens error provided elsewhere. Make sure it doesn't change its FOV or auto-zoom and mess up your data.

Using the FOV angle, computed angles from the meter stick calibration, or a manual probe of the xy location from the meter stick image, determine the real 3d vector of the centerpoint of the circle.

You now know in what direction the CIRCLE is from the camera.

But obviously Ballies are more interesting.

You know the visual size of the Ballies in your image, you know their actual size, and you think the ball and the circle are the same color, thus you think you know which ball is represented.

Based on the FOV angle, the Ballies' 3D location is simply a solution using the measured size, vector angle, and real size.
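(A hedged Python sketch of that solution: the bearing comes from the pixel position and the camera FOV, and the range comes from comparing the ball's known diameter to its apparent size. A distortion-free pinhole camera is assumed, and the range formula is a small-angle approximation.)

import math
import numpy as np

def ball_position_3d(cx_px, cy_px, diameter_px, image_w, image_h, hfov_deg, ball_diameter_mm):
    """Return the ball center as an (x, y, z) vector in mm, camera at the origin,
    z along the optical axis."""
    # Focal length in pixel units, from the horizontal field of view.
    f_px = (image_w / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    # Unit vector through the circle's center.
    direction = np.array([cx_px - image_w / 2.0, cy_px - image_h / 2.0, f_px])
    direction /= np.linalg.norm(direction)
    # Apparent size vs. real size gives the range: D_real / distance ~= D_pixels / f.
    distance_mm = ball_diameter_mm * f_px / diameter_px
    return direction * distance_mm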

 [Demonstrate further...]

You have now computed your Ballies' 3d location.

But now that you have determined where it /has been/, where is it now?

Modern technology can easily process this video in realtime.

Let's visualize the ball and camera in 3d.

 [Show 3d view of camera with lens vectors and ball]

Next, we automate the process of ball finding and tracking.

If you know where the ball might be, you can attempt to start at that location and find its representation in the current frame, or you can search the entire image to find them all.

If your camera is in a static position...

 [mount cameras various places]
( two pointed at stage overlap speaker, one pointed from behind speaker toward audience, etc...)

Then it is likely you can take an image of the background and subtract it out each frame, making it far faster and easier to find your Ballies.

 [Remove all targets and humans from scene, get bg image.]

Now we simply search the image for Ballies!

If the pixel data is not the background, trace through it and see if it is a contiguous circle. 

We won't worry about optimizations now, just find every instance of a perfect circle!

(trace incrementally, if a circle, write to list.)

Obviously, filtering out error is good, and sorting the list to remove duplicates is nice, and possibly eliminating noise (less than 1 pixel diameter) would help for now.
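(A Python sketch of that search loop, assuming a stored background frame and a circle-tracing routine like the one outlined earlier; find_circle_at is a hypothetical helper returning (cx, cy, radius) or None.)

import numpy as np

def find_all_ballies(frame, background, find_circle_at, diff_threshold=30, min_diameter=2):
    """frame, background: HxWx3 uint8 arrays. Returns a list of (cx, cy, radius)."""
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=2)
    foreground = diff > diff_threshold              # pixels that differ from the background
    visited = np.zeros(foreground.shape, dtype=bool)
    found = []
    for y, x in zip(*np.nonzero(foreground)):
        if visited[y, x]:
            continue
        circle = find_circle_at(frame, x, y)        # trace edges; None if not a contiguous circle
        if circle is not None and 2 * circle[2] >= min_diameter:
            cx, cy, r = circle
            # Mark the circle's bounding box so the same ball is not re-detected.
            visited[max(int(cy - r), 0):int(cy + r) + 1,
                    max(int(cx - r), 0):int(cx + r) + 1] = True
            found.append(circle)
    return found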

Getting the average color of the ball will assist in identification.

Once a ball is found, let's show it in 3d in the rendering.

 [prepare and confirm software process]

Let's add a ball to the view.

 [add ball to one camera]

Does it show correctly in 3d?

-- if not, make sure uniform ball size is currently used.

 [add ball to one other camera]

This one works too.

 [RETURN TO PODIUM WITH HAND FULL OF BALLIES]

 [PLACE BALLIES ON PODIUM IN VIEW OF ALL CAMERAS]

(please test your camera resolution and placement for this scene onward)

(OBLIVIOUS TO PODIUM BALLS ON SCREEN)

Well, we've succeeded in finding a ball in a picture.

What about multiple balls?

Should it be possible to find ALL your Ballies?

(Ballies can be tossed into camera view, if they aren't already being tossed)

(hopefully the 3rd camera sees balls in audience? ;) )

 [Notice Ballies already found...]

Well then, hehe, let's make use of them!

===Rigid=Bodies===

We know where our Ballies are, apparently, so let's make use of them!

We will create some rigid body structures to assist us in utilizing the capabilities of our Ballies.

To perform a Rigid Body computation, one must simply have an awareness of his Ballies and utilize them to form a Rigid Body structure.

Let's take two Ballies and tie them to a stick.

Hopefully you'll  see the point.

With a known orientation of the stick with Ballies on either end, and a known distance between them, a Rigid Body can be identified computationally and thus utilized more effectively.

Here one ball and another ball are attached with clear tape to a stick (pencil, etc, depending on size) and the distance between them measured.

Wait! Didn't we already make a tool to have the machines do all the measuring?

Fine then, two balls on a stick shown to the camera.

It says one ball is at (coords) the other at (coords)...

Let's then find the difference between the ball centerpoints.

 [DO (use table's math)]

And just to make sure, the ruler says nn millimeters, machine says nn.abcdefg millimeters. Cute.

Spinning the Rigid Body around, just to guarantee accuracy, the smallest value is AA, largest distance BB.

So this rigid body has a certain range of observed characteristics, thus we can identify it.
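(A hedged Python sketch of registering and later matching a two-ball Rigid Body by its observed length range and ball colors; the class and function names are illustrative, not the Ballies.Org API.)

import numpy as np

class RigidBody:
    def __init__(self, name, colors):
        self.name, self.colors = name, colors
        self.min_len, self.max_len = float("inf"), 0.0

    def observe(self, p1, p2):
        """p1, p2: 3D ball centers in mm. Update the observed length range during registration."""
        length = float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))
        self.min_len = min(self.min_len, length)
        self.max_len = max(self.max_len, length)
        return length

def identify(bodies, p1, p2, colors, slack=2.0):
    """Return the registered body whose color pair and length range match this observation."""
    length = float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))
    for body in bodies:
        if set(colors) == set(body.colors) and body.min_len - slack <= length <= body.max_len + slack:
            return body
    return None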

 [add rigid body to database]
(visualization should show line based on ball location)

The Rigid Body Structure easily both helps in identification as well as utilization of our Ballies.

One red ball, one blue ball; let's give them some direction. Students, which one's on top?

(aside):
Yes, Nina, you can do two red ones.

So it points this direction?

 [Confirm directionality and pointing vector in rendering]

So let's use them!

Everyone make your temporary Rigid Body Structure with two Ballies and come register them with the camera...

(rigid body registration sorts vector data based on proximity and Ballie characteristics, in large groups with a standard pencil, expect collisions!)

 [FACILITATE WAVING RIGID BODY STRUCTURES AT CAMERAS]

Now that we all have a pointer, let's do something constructive, like identify our interface...

Draw your name or a symbol in 3d; let's practice once...

 [DRAW "BOB", SEE TRACE IN 3D]

(In large groups, this should fail; false identifications or obfuscation should generate collisions and errors.)

Obviously, though our Ballies are infallible, our implementation is erroneous.

There were collisions between A and B; for example, at long range similar Rigid Bodies appeared the same or were misinterpreted as one another.

Since we obviously don't need the ruler any more, let's add another standard ball to each of our Rigid Bodies, make sure it is attached well, and re-register them with the cameras.

We'll add distinctive identification to each as well.

 [ATTACH 3RD BALL (strange angle), REGISTER, DRAW REAL NAME]

(facilitate students to do same)

If your earlier structure was not visible, change positions as necessary, add and calibrate additional cameras.

 [GUARANTEE ALL CAMERAS AND VISUALIZATION SYSTEMS ARE CALIBRATED ACCURATELY]

If your webcam or cell phone or other imaging device is not correctly calibrated, do so now and synchronize with the server.

(at this point, I have had rooms with close to a thousand cameras and hundreds of users all working collaboratively and simultaneously. hundreds of humans on 2 cameras as well. Welcome to 3rd world education.)

Let's remove any additional obfuscations from the imaging environment as well, and cross-calibrate our cameras...

(room with chairs, optimized views of cameras from users, good visibility of best screen for all)

===Collaborative=Calibration===

We now have all available imaging devices (even if it is 2 cams) arranged for effective (although not optimized) viewing characteristics of our Ballies, and we are all in a camera visible location with our associated Rigid Body.

We can all be seen and see the screens, right?

 [fix]

The first thing we should do to customize our independent interface together is to identify our spaces.

We can either use the real 3d environment here, or we can use a relative environment as determined by additional rigid bodies.

If you want to use a relative space, at least two balls will be needed for comfortable use, which will reduce your available interface structures. 

(if you have 5+ balls each (rainbow) it works well, as does shared relative spaces, don't forget an extreme case with ... Lotsa balls)

The most effective RB locations for interface seem to be shoulder-mount, temporal (glasses), and hip mounted, depending on your manner of interface.

(lots of clear box packaging tape is useful. telling students to wear black or non-ball colors helps for optimization. large black garbage bags make good outfits. NOTHING SHINY OR REFLECTIVE)

You can recalibrate, change, and modify your interface characteristics at any time. Just be sure to tell the machine.

 [Calibrate and confirm visibility of structures]

Now that we all have our own Ballies.Org to play with, let's work together to optimize the system to help us help ourselves.

First thing, of course, is basic calibration of our interface. We will be working together, so we will be using the same view.

Let's all point comfortably at the lower left corner of the screen we are using. Hold still.

 [Trigger calibration sequence.]

At any time you can go over to the recalibrate system and design a new interface. (this is the view window at top right)

If you get totally lost, get someone in physical proximity to help rescue you. (button middle right)

 [FOLLOW CALIBRATION INSTRUCTIONS]

Now that we are ready to go, let's go back through the sections of development that you think need the most work or optimization, and idealize the entire system ourselves.

When we have completed this task, we will have created the entire Ballies.Org environment ourselves.

Please point to the section of this project you think needs the most work.

 [JOIN AUDIENCE]

(At this time, the analytic engine takes over and everyone works collaboratively with a voting style protocol that gradually works through system design and optimizations, result is an ideal system design and comprehensive familiarization with interface capabilities.)

 [If there are time constraints, notify of such now, also advise that their project is available via Ballies.Org subdomain or project ID, and that their own equipment may be used at any time from anywhere. Remote collaboration is encouraged, and continuation to the next phase of development will continue next session.]

(If next session is literally next, here is your lunch break. They will need time to adapt and idealize their interfaces before exploring 3d extrapolation.)

DON'T FORGET YOUR OBFUSCATED BALLIES, PROCESS OPTIMIZATIONS LIKE BACKGROUND REMOVAL AND LUX THRESHOLDING, ETC.

(Again, I find that dressing everyone in black (not shiny) garbage bags and allowing the Ballies to be hidden for the background image sequence acquisition allows not only a fast optimization of image search with known subtraction (head and hand motion is blurred in the bg image) but also a psychological advantage.)

==================================

Now that you have optimized your Ballies.Org processes, had a snack (hopefully), and possibly built some idealized 3d interface devices and fixed up your Rigid Body Structures, it's time to quickly review the resulting system and then work together to put it to use!

(A table/bin of constructible components could have been provided at lunch, along with food, to assist individuals in creating a physically solid, optimized rigid-body interface, possibly with dynamic variable controls (ball slides on stick); and, if resources exist, you may have outfitted at least the higher-performing individuals with luminescent higher-accuracy body-tracking structures and allowed everyone to create their own interface tools.)

 [REVIEW SYSTEM]

The biggest problems of speed and accuracy have been addressed, and you have succeeded in convincing the Ballies.Org server to give you the optimal system for this pursuit; no less, you now comprehend how to go farther on your own.

If there were recommendations on physical reorganization of camera locations or environment, these should have been completed and recalibrated by now.

It's time to take our Ballies.Org beyond the abstract and fully into a higher dimension!

===Reality3D.Org===

Ballies.Org, as you know, is an introductory primer for Reality3d.Org, a comprehensive protocol for environmental simulation and analytic modeling.

Toward "A Higher Dimension of Reality; Reality3d.Org"

Now, before we get into excessive modeling and simulation of whole-earth physical systems, let's cover the basics with Ballies.Org's 3D extrapolation primer.

We have Ballies!

We know where they are, can track them pretty well, have devised methods of human to machine interface so long unknown to ...(word meaning non-drone).. kind. 

We drew identifying symbols in 3d, and designed our own highly optimized physical interface devices.

We wouldn't be here if we didn't want more!

Let's make a quick effort to draw something in 3d.

It could be a star or cloud or words... doesn't really matter. Just to make sure your interfaces are working correctly.

 [DRAW SOMETHINGS]

Now, if we can draw something simple, can't we draw something highly complex like an engine or entire industrial factory?

Sure! But if it already exists, why not just take a picture of it?

 [TRIGGER EXTRAPOLATED ENVIRONMENT DEMO]

(with at least two cameras, even 1 if necessary, the system should show a 3d structure of your environment based on current snapshot view.)

Here we see... us... our Ballies.Org, basically everything here in our room or environment.

Where did that come from?

Well, let's do better and figure it out ourselves!

Hint: it came from the camera just now.

 [Return system to guided interface.]

How exactly did the cameras calibrate themselves based on synchronized watching of Ballies' motion?

More to the point, how did the system pull a 3d model of everything here from just one set of pictures, that quickly?

You might have said spatial imaging... but spatial imaging is futile when you have no scanning target and you already know the exact location of your Ballies in 3d from any one image...

Well, we can try using more than one image to find that our Ballies are in the same place, or we can try to extrapolate vector data directly from multiple images, or we can do any of 20 other various methods of 3d extrapolation... 

Obviously we are using Ballies.Org for a reason.

The Problems:

Our estimation of Ballies' location is based on something that resembles a circle in one flat image.

We had problems with hidden Ballies, and had to check for arc edges to estimate the location of partially hidden Ballies, and we never really extrapolated any 3d data from the images; we simply estimated the 3d location of the ball as derived from our data sources.

Obviously, we need to actually identify and confirm our Ballies exist in 3D rather than potentially being a ghost circle of paper perfectly tangential to our camera.

===Ballies'=Extrapolation===

To start, let's list what we know about the real Ballies.

What we see is a flat pixelated image of awkwardly shaded pseudo-circle shapes.

We have confirmed the X and Y axes in the image of the circle to be somewhat contiguous in color, and have done the same with the four 45° rays.

We have found an accurate perceived edge boundary and sub-pixel tangent intersect location at 8 points, and a corresponding center point.

We have traced partial circles and derived potential characteristics of such by measuring their arc.

We have NOT confirmed the existence of any sphere, nor even attempted to do so. We fail to see balls that are hidden.

What we do have:

Approximate data of potential Ballies' location, size, color, etc.

Known vectors between any Ballies and the camera, where visible.

Approximations, based on this, of the proximity between speculated Ballies and the list of related Rigid Body structures.

Points on a flat plane that correspond to color variances somewhat related to a perceived circle.

Vector computations that provide known camera to pixel ray vectors.

===The=Hard=Part===

We need to determine something finite; we have pixels, FOV data, approximate circle data, derived speculated Ballie data.

How do you know you are not looking straight through a hole at a flat wall?

Well, let's put some Ballies out in a neutral environment with the background removed and use 2 cameras.

We'll ignore lighting speculation for now and leave that for the reverse raytracing section.

So what do we see?

Got one picture with some circles in it, and another differently angled picture with some circles in it.

To make it simple, there are several ways to deal with the data at this point.

Correlate pixel data after normalization to create intersection "Blobs" and filter for contiguous hull structures to determine 3d shell locations.

Correlate circle projections using data from edge tracing and boundary estimates after normalization to determine 3d edge locations.

A fourth method, which is simply irrational without significant induced error and filtering, takes hours per image and uses complex matrix normalization and cross vectors to extrapolate 3d structures; even an optimized version without reference-point data is slower than the Ballies.Org demos.

Or, more likely: use speculated Ballies to find approximate 3d camera locations through a simple normalization process with a basic matrix or linear function, then go back and find the exact locations of the cameras and Ballies, which can easily be used to perform optimized vector correlation comparisons using both extrapolated boundary data and pixel ray extrusions.

Ok, so, this means exactly what?

Just find the vector between the cameras and use an average center point as a normalized origin; then fix the location data of the Ballies to the highest possible resolution given the available data; then go back and extrapolate everything else using a combined method!
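(One possible implementation of that normalization step, sketched in Python: assuming each camera has already estimated the same Ballies' 3D positions in its own frame (from size plus bearing), a rigid alignment of the two point sets gives the rotation and translation relating the cameras. The script does not name this algorithm, so treat the Kabsch/Procrustes fit below as an assumption.)

import numpy as np

def align_camera_frames(balls_in_a, balls_in_b):
    """balls_in_a, balls_in_b: Nx3 arrays of the SAME Ballies, ordered identically.
    Returns (R, t) such that R @ p_b + t ~= p_a, putting camera B in camera A's frame."""
    A = np.asarray(balls_in_a, float)
    B = np.asarray(balls_in_b, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)          # centroids act as the shared origin
    H = (B - cb).T @ (A - ca)                        # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t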

===Methods=of=Extrapolation===

Once you know the exact and normalized location of all of your cameras (yes this is ONE camera taking multiple images of a static environment), you can then quickly run through and extrapolate precision locations of your reference spheres and perform calibration processes like color correction.

Once you have maxed out precision available from your reference spheres (now known technically as Ballies), you can trust the maximal precision derived for the camera characteristics. This would include compensation for lens or environmental distortions (including vibration or image compression and shutter rate error) and other otherwise erroneous hassles.

Once this ideal (within the scope of data resolution) camera location is found, the fastest (and comprehensible) mathematical process is as follows:

Compare each ray or trapezoid extrusion from each pixel to all others it intersects. If the coloration is close, there is a potential that it corresponds to a real 3d surface. Assign a weighting and quality-control value to each characteristic, and identify the origin vectors and range for each "Blob" produced.

Later, cross all Blobs with the others they intersect, and merge and refine the weighting data. If one Blob is one color and another significantly different, they cannot possibly occupy the same space; split them into the intersection and non-intersection Blobs. At a certain threshold (dependent on data), spurious Blobs will be eliminated as the data set is filtered and refined. Contiguity of color and hull structuring across contiguous refined Blobs gives potential surface structures. Identification of further obfuscations and reduction of internal solids' Blob remnants gives surface structures at decent accuracy. NOTE: HERE YOUR PIXEL BLUR IS PROBLEMATIC.

Returning these contiguous color structures (using Blob-based identifiers) back to the original data set images allows for ideal sub-pixel analysis to find the boundary of the visual contiguity using actual colors and projected proximities from the complex 3d model. This boundary finding constructs the most accurate 3d model (using function generators) of the solid 3d data set WITH the best possible pixel edge identification, to create and project the most accurate bounding function per contiguous object using all possible data.

These boundaries, not point structures, provide the best finite components of the 3d extrapolation. The combination of these boundaries with their speculated filler data and Blobs allows for reverse modeling of lighting characteristics, determination of light source vectors and characteristics, and thus the creation of mathematically accurate shell data using both Blobs and extrusions.

Note: it IS possible to first find boundary estimates in the images and attempt to project them and test color contiguities, and it IS VERY possible to optimize these methods significantly. However, aside from producing better speculated shell structures on which to project pixel data, the localized gains are limited, there is a loss of potentially indeterminate boundary data through blur, and the idealized correction to these analyses is functionally similar to extreme optimization and merger of the third step of the method recommended above, but it can NOT handle realtime or sequential addition of images, whereas the above can apply the boundary finding and reverse boundary analysis (and reverse raytracing) per each additional image.

The correct result would be a consolidation of boundary structures and their own generator functions, independent of other boundaries on the same object and intersecting, and a render shell created from these functions. However, of course, the construction of these boundary traces sufficiently accurate enough to represent the shell structure would require exponentially more diverse images including angles that are not possible in physical reality.

In use, especially in natural-light situations with natural materials, resolution is significantly faster and more accurate when using light modeling and pixel-extrusion intersections together with boundary shell estimates derived from both sub-pixel and shell-centric extrapolations.

So, in short, you can either endlessly analyze pixels to extrapolate boundary potentials and never actually generate 3d data from continuous sampling, or extrapolate and utilize 3d Blob data from the first image in, with constant refinement from multiple simultaneous realtime data sources.

In scenes, a transversal camera, especially if autonomous, will significantly increase the localized data resolution based on its local proximity while continuing to facilitate the creation of functionally optimized boundaries per image. Without at least basic Blob surface structures in realtime, your bot will fall into a black hole rather than explore the cave.

...

Using edge structures like the spheres to sector and partition blocks of image data for localized processing provides opportunity to reduce out 3d structure error derived from obscuring objects and provides for partitioned attention regions in 3d to be processed more efficiently.

===The=Simple=Part===

With that said and likely completely understood, it is most effective to find Ballies, use them to find better camera vectors, normalize, then use a fast boundary process extruded from the edges of the spheres' circles to find the most exact location (normalized) of the sphere itself. 

Once the best Ballies data is found, and the camera location and characteristics are recalibrated, use extruded trapezoids or another more effective color extrusion method *** to generate Blobs that can immediately be utilized, and use these to find color irregularities which indicate a potential finite edge boundary. Once Blob modeling is done, possibly with inline filtering, the approximating 3d Blob data closes on (n*n)/(n+1) times the resolution of any image #n when views overlap, or better. The addition of boundary finding and pixel color analysis reduces the data set size significantly with curve fitting of sub-pixel edges to Blob-data edges, and thereby removes data mining requirements for autonomous transversal systems and allows function-based environmental awareness modeling to approach maximal efficiency for civilian imaging tools, while allowing 3d route planning and imaging planning to use mathematical functions to speculate terrain clean enough for the most simple of embedded systems.

***Please note, sub-pixeling here to generate "potential" extrusions is more time intensive than projecting the pixel and generating a Blob when realtime (especially for reconnaissance robots) 3d model data is required for transversal.

===Forget=Simple=Just=Do=Now===

To make this as hard as possible:

A pixel, based on a camera's Field Of View, is a known trapezoid extending from the origin of the camera toward a known distance.

Although it is possible to use more complex shapes in this extrusion than a simple square, we will not use them now because it is slower and far less effective in any implementation on the technologies we have available. We will reverse model these shapes and then return their vector to the image to further optimize them.

If your eye is looking in one direction and mine in another, where something visible overlaps, one ball or brightly colored object in a grass field will persist in only one location in either eye.

Although the view of the object is different from each eye and the visible sections may never totally overlap, there still exists a bright color at a certain vector.

The perceived edges of either view, extruded, will in most situations correspond with a surface of the other view. We must know the location of the solid object's hull first.

By examining the intersections between trapezoids extruded from pixels, testing for correlations in color, and generating a 3d solid Blob if desirable conditions exist (note that when using fast sorting hardware like modern rendering chips, generating all possible Blobs and then sorting them out is physically faster), we construct a 3d object that may represent a mutually visible surface of the object.
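(A deliberately simplified Python sketch of Blob creation. The full method intersects irregular trapezoidal solids; here each pixel extrusion is approximated by sample points along its ray, and a "Blob" candidate is recorded wherever samples from two cameras nearly coincide and their colors agree. A toy stand-in, not the production method.)

import numpy as np

def make_blobs(pixels_a, pixels_b, max_range=5000.0, step=25.0, merge_dist=25.0, color_tol=40.0):
    """pixels_a, pixels_b: iterables of (origin, direction, rgb) per pixel, in mm.
    Returns a list of (position, rgb, weight) Blob candidates."""
    def samples(origin, direction, rgb):
        d = np.asarray(direction, float)
        d = d / np.linalg.norm(d)
        o = np.asarray(origin, float)
        c = np.asarray(rgb, float)
        return [(o + r * d, c) for r in np.arange(step, max_range, step)]

    pts_b = [s for o, d, c in pixels_b for s in samples(o, d, c)]
    blobs = []
    for o, d, c in pixels_a:
        for pa, ca in samples(o, d, c):
            for pb, cb in pts_b:
                # Nearby in space and similar in color -> a potential mutual surface element.
                if np.linalg.norm(pa - pb) < merge_dist and np.abs(ca - cb).max() < color_tol:
                    blobs.append(((pa + pb) / 2.0, (ca + cb) / 2.0, 1.0))
    return blobs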

It IS possible to use larger sectors or multiple pixel wide intersections of contiguous characteristics to test affinities, however this method must be meticulously optimized to avoid holes and accommodate depth-stacked objects like leaves. Start with single pixels, then explore larger sectors at the same time as boundaries.

The Blob is placed in a database sorted by physical location and indexed by image source. The Blob has data for its sources, accuracy, and characteristics, and later its proximities and affinities to nearby Blobs. Each pixel could have infinitely many Blobs based on the total number of overlapping pixel extrusions; however, an image from far away crossed with an image from nearby the object will result in the distant extrusion being far larger, and thus less accurate, than the closer one. Sorting by data quality and accuracy later allows better extrapolation in the final model.

There is potential that a small object spanning less than one pixel in multiple images will be sufficient to extrapolate 3d Blobs, and thus to coordinate boundary searching for it, when pixel-based boundary searches will never see it.

There is more potential to optimize all of these systems into one that can be implemented more efficiently in hardware ;)

...

You take an extrusion of the pixel along its irregular 3d trapezoid based on the camera location and see if it intersects any others. If it does, you create a Blob. There is a finite number of surfaces and vertices that a Blob can have based on its derivation from square pixels. Perhaps this makes data storage easier? The variable data, quality, origin/image id, and pixel values are also finite. Perhaps there are 5+ color values on each extrusion?

You can display the Blobs at any time using any rendering technique. Properly ordering the vertices' data allows for reduction to triangles for some methods or simple recursion for others. If you use hardware renderers, note that the same vector chip can do intersections and Blobs and sorting far faster than a CPU.

Once you have crossed all the pixels from all the images that actually intersect (hint), created Blobs for each, and tried to reduce bad data out for visualization, see how it looks.

You might want to keep the Blobs that don't match in complex scenes because the next process may find use for them.

To further refine the Blob data and attempt to optimize 3d structure identification, using 3d sectors and proximity, test to see if the Blobs overlap. If so, and their data correlation is desirable, break them down into each piece. Put this resulting data in another database and make sure only the smallest of pieces gets through to the final model (no further overlap exists).

Perhaps extruded pixel trapezoids are Blobs to begin with and the entire process is self-similar and recursive? The tree on the distant mountain will be available in 3d, so do not try to restrict the values in your data. 

The addition of images in realtime does not need to modify the standing data. The first layer of pixel intercepts is done with rays and pixel extrusions. Anything beyond is done with Blobs. Further analysis can be computed in parallel from whatever optimized Blob level is populated. If the blob says it is highly accurate from many image perspectives, this is a good thing. It still should not overlap any other Blobs.

Once a 3d model is created and boundary structures are identified for regional objects, the new images typically need not be crossed with others completely out of scope. Be sure to keep blurred edge data that can be useful in pixel analysis locally. 

When processing a circle of objects where no data overlaps the gap in a 'U' shape, it is likely that the introduction of another image that attaches these ends will find gross error in the locations of the reference objects. If this is the case, likely due to lens or number size issues, it is necessary to re-normalize the entire scene to fix these problems. The entire process can be redone if desired, however if there is potential to get additional data closing the gap before the reanalysis completes but the sector is needed by a physical device (AKA the one taking the picture) then a translation of Blob vectors based on heaviest weight (most accuracy) including the new data can be used.

When completed data from one sequence overlaps another sequence in realtime, unless it is from exactly the same instant, it is best to construct the base data independently and then merge both levels. Lighting differences, finite changes in structure, ball movement, and camera variances may all contribute to a completely different scene. Obviously, merging it all is ideal later, but deviations over time (especially lighting) give insight into advanced physical qualities, materials, and time-series models of footprints, residuals, and erosion.

Use of solar tracking and lighting manipulations is useful for ideal color identification and facilitates error correction from shadow and similar.

===Data=Done=Then=What?===

Your Blob data is derived from N frames of images or video, enough to get a good view of your target scene from most angles. You then compared all pixels to all pixels and created Blobs. You then reduced and consolidated Blobs as best possible, merged analysis sectors, and created a contiguous model.

Let's do some math:

=======

30fps @ 800x600 @ 24bpp raw for 30 seconds.

30fps*30sec=900 frames.

Comparing each frame to each frame is 900*900, or roughly 810,000 FRAME comparisons

each having 800*600 pixels = 480000 pixels

480000*480000*900*900= Blobs.
X*y*x*y*numframes^2=Blobs.

Without reductions, min Blob size of 256 bytes, that's a fairly large hard drive. Assume 1k with the merged ones.

Of course you can do all types of optimizations... ;)

For speed, it's 16 flops for lookup *2, plus optimized isect bounds at 20 flops per 8 verts, 32+160+write32 plus some... let's say 250 flops per blob reduction. Basically means WOOPS FUCKED.

Back-of-envelope spreadsheet (values preserved as originally computed):

  First estimate (filtered intersections):
    x                   800
    y                   600
    flops per blob      256
    frames              10
    intersections       1.74182E+12   !???
    flops               4.45907E+14
    clock               2.5 GHz
    time                178362.7776 sec  =  2972.71296 min  =  49.545216 hours
    blob size           280 bytes
    raw size            454216.0034 GB
    retention           0.0025
    stored              1219.2768 GB  (or 1135.540009 GB)

  Second estimate (brute-force all-pairs):
    x, y                800 x 600  =  480000 pixels per frame
    frames              10
    comparisons         2.304E+12
    blobs               2.304E+12   !???
    blob size           256 bytes
    total               549316.4063 GB
    retention           0.006  (4:1 full use, 1:1 pixels); also noted 0.006666667
    retained            3295.898438 GB
    flops per blob      256
    clock               2.5 GHz
    time                1474560 sec  =  24576 min  =  409.6 hours
=======
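(The same back-of-envelope arithmetic can be reproduced in a few lines of Python; the per-comparison flop count and bytes-per-Blob below are assumed constants similar to the spreadsheet's, so the totals only roughly match it.)

x, y, frames = 800, 600, 10
pixels = x * y                              # 480,000 pixels per frame
comparisons = pixels * pixels * frames      # naive all-pairs crossing ~ 2.3e12
flops = comparisons * 256                   # ~256 flops per intersection test
seconds = flops / 2.5e9                     # one 2.5 GHz core, no optimizations
blob_bytes = comparisons * 256              # worst case: every comparison yields a Blob
print(f"comparisons    ~ {comparisons:.3e}")
print(f"compute time   ~ {seconds / 3600:.0f} hours on one core")
print(f"raw Blob store ~ {blob_bytes / 2**30:.0f} GiB before any filtering")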

Obviously, transferring a TB of data to a tiny little bot just to figure out where to go locally is irrational, no less attempting to send it gigs of local reconnaissance data when it's the thing that took the pictures from the start. The point of having a device offload data is to make things more effective.

A typical chip used for on-board sequencing control has 16kb ram for both program and data, and has a simple math unit.

If it is given a set of vectors of where the local reference Ballies are, and its camera tells it where a Ballie is, then there is no excuse for it to not only know where it is, but also be able to determine where it should go next and determine its own scripting and mechanical control. 

How can it process a 3d landscape model and determine its best route? Quite easily: you send it a basic function generated by the boundary and shell analysis of its local environment and allow it to plug in its local vector to the Ballies and determine both its exact location and where to go next, then engage its routines to determine the physical scripting requirements and actually go there.

These functions, compiled, are usually built on a 1kb standard; your system compresses the function generator and builds an error-checking table, then the chip plugs its 3d vector in and can sequentially trace possible route options using the same function.
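(An illustrative Python sketch of that idea, not taken from the original text: the server ships the bot a tiny terrain-cost function plus the local Ballies' vectors; the bot fixes its position and greedily evaluates neighboring steps with the same function. The cost function here is a made-up stand-in for the compressed generator.)

def terrain_cost(x, y):
    """Stand-in for the compressed generator function sent down by the server."""
    return 0.002 * (x * x + y * y) + 0.5 * (int(x * 37 + y * 91) % 7 == 0)

def next_step(pos, goal, step=50.0):
    """pos, goal: (x, y) in mm. Evaluate the 8 neighboring steps, return the cheapest."""
    best, best_score = pos, float("inf")
    for dx in (-step, 0.0, step):
        for dy in (-step, 0.0, step):
            if dx == 0.0 and dy == 0.0:
                continue
            cand = (pos[0] + dx, pos[1] + dy)
            # Cost of the ground there plus the remaining straight-line distance to the goal.
            score = terrain_cost(*cand) + ((cand[0] - goal[0]) ** 2 + (cand[1] - goal[1]) ** 2) ** 0.5
            if score < best_score:
                best, best_score = cand, score
    return best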

===Woops=Fuck=Damn=You=Military=And=More=Fuck===

Well, the systemic bandwidth suppression of American military incompetence against the world (it's not our fucking problem if you can't make faster spy equipment), which historically has limited digital systems to a maximum of 16 standard TV channels of bandwidth, also hits hard again here.

The last time this was looked into, using LEGAL methods of analysis as described above, was a decade ago.

The world should be more evolved by now.

===End=Fuckoff=Lecture===


===Using=Numbers===

The numbers for Blob raw data are well within conventional data storage capability now, but the computation for even 10 clean images exceeds the 12 hour tolerance threshold for distributed processing.

5 frames of 800x600 generate a maximum of 15 GB in 5 seconds, or about 1 fps.

frames   sec     GB (small)   GB (1k)   MB (filtered, 1k)
1        0.05    0.125        0.45      1.22
2        0.1     0.25         0.9       2.5
3        0.3     0.75         2.68      8
4        1.2     3            10        30
5        5.9     15           53        150
6        35      90           320       800
7        250     630          2250      6000
8        1981    5050         18000     50000

With the tightest filter, 30 minutes maxes out at only 50 GB of data; however, 800 MB in 30 seconds looks clean enough to set the threshold for distributed processing at 5-7 comparisons (8 frames), as seen in the table above.

Blocks of 6 images take 5 seconds or so, which still gives 1-15 second transfer times on standard gigabit.

Creating stacks at 2-3 comparisons max is suitable for 10mbit and lower dsl and cable clients.

Please note that Ballies are pre-processed in realtime at the instant of acquisition, the precise location of the camera and Ballies is continually optimized as additional images come in, and the full-scale post and intermediate processing described here is only supplemental to realtime extrapolation.

The biggest problem:

===Rationale=For=Ballies===

As Indicated, the biggest problem for distributed computing and 3d extrapolation is the coordination and scheduling of data processing.

Using a matrix transform of pixels similar to the designs of supercomputer arrays generates terabytes of data from as few as 10 800x600 images. The overhead processing is great for accuracy in a single pass, but obviously any realtime system must be designed differently.

Prioritizing and ordering data is critical for effective processing.

To attempt brute force comparisons to FIND your camera and target location is irrational.

Contrarily, attempting to merge potentially skewed models from low density comparisons (like less than 3 images) generates excessive error unless something correlates the references accurately.

Thousands of images may be taken of Ballies in one static location, and only 3 images analysed at any one time, but they ALL use the exact same infallible spheres as their references.

The result is that the entire world of Ballies can be modeled quickly and abstractly, and then the sorting, calibration, and image processing can be distributed effectively to idealize camera and view locations.

Using Ballies, one can quickly determine exact image vectors, resolutions, and scope immediately, correlate this with target priorities, and optimize exact locations per image independently using exact Ballies data and nothing more.

If sector data is to be compiled and offloaded for processing of a localized area, it is via Ballies' references that data is selected.

The return of shell and boundary structures, especially of localized Ballies, provides further optimization of processing by defining an exact edge and characteristics of a remote view that includes said sector.

Such data is then used to optimize the locations of distant cameras using a fraction of a pixel that correlates to a known Ballie, and thus provides accuracy beyond any more intense method.

After all data is processed (obviously post realtime analytics), further review may seek to eliminate skew and distortions in data that may not have been perfectly calibrated, and distributed systems can easily compare far distant and remote images to model even farther distant or obscured targets using only intermediate or locally visible Ballies when the entire Ballies structure has been mapped.

The final outprocessing of data includes an option of shell and Blob data sets, but strives to fit a highly accurate analytic model to the entire environment as a whole. The goal is to reduce the entire complexity of the physical system to a simple and concise mathematical generator function. If done correctly, using all methods available, the millions of images used to create billions upon billions of Blob elements, and the infinitely many Ballies.Org reference spheres used to map giant landscapes or complex industrial environments, reduce down to a number-generating function that easily fits on an ancient floppy disk.

Reality3d.Org has been able to reduce forests of trees down to a seed distribution function with genetic characteristics and growth factors defined by minimal function-generated mathematical attractors, with resulting accuracy in excess of 12% better than the best original raw data sample.

Projects to model or reverse engineer industrial facilities and forensics resolution areas have resulted in reverse modeling all the way down to the specific operating components of industrial machines, generation of engineering data capable of accurately simulating the entire facility, and facilitation of the production of an entirely new (and optimized!) matching facility that was more preferably located in an environmentally safe area. Nothing more than image data and Ballies were used in this process. No humans were introduced into the deadly environment, and the robotics systems employed are still on site continuing hazmat cleanup.

Possible with conventional, inexpensive, accessible technology anywhere on the planet or beyond, simply because you have Ballies.Org!






