Channel: Kinect - Processing 2.x and 3.x Forum

generate a tone depending on room position


I want to have a set of tones play according to a user's position in a room. Can someone help me with a sketch? My research suggests I need the oscP5 library and SimpleOpenNI. I want to send the data to Max 7 to generate the tones. I don't mind using Processing 2; I am mainly having trouble with the OSC library. Help please! Thank you!
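A minimal sketch of the sending side, assuming the oscP5/netP5 and SimpleOpenNI libraries are installed. The address pattern `/kinect/pos` and port 7400 are arbitrary choices of mine; they must match a `[udpreceive 7400]` → `[route /kinect/pos]` pair in the Max 7 patch:

```java
import oscP5.*;
import netP5.*;
import SimpleOpenNI.*;

OscP5 oscP5;
NetAddress maxPatch;
SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableUser();                         // user tracking gives us positions
  oscP5 = new OscP5(this, 12000);               // local listening port (unused here)
  maxPatch = new NetAddress("127.0.0.1", 7400); // where Max is listening
}

void draw() {
  context.update();
  int[] users = context.getUsers();
  if (users.length > 0) {
    PVector com = new PVector();
    // center of mass of the first tracked user, in millimeters
    if (context.getCoM(users[0], com)) {
      OscMessage msg = new OscMessage("/kinect/pos");
      msg.add(com.x); // left-right -> e.g. pitch in Max
      msg.add(com.z); // depth      -> e.g. volume in Max
      oscP5.send(msg, maxPatch);
    }
  }
}
```

On the Max side, unpack the two floats and map the millimeter values to whatever synthesis parameters you like (the scaling can happen on either end).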


Only track one face instead of 6


So I'm using the simpleFaceTracking example from KinectPV2 (the Kinect for Windows library).

I'm trying to limit the number of faces it detects to just one. Context: in this program I'm trying to use the nosePos vectors (nosePos.x and nosePos.y) to check whether the nose is inside a grid and display a picture.

The code is here: http://pastebin.com/gF7L1b0P
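Without running the pastebin sketch I can only sketch the idea: KinectPV2's face-tracking examples read the detected faces from a list, so restricting detection to one face is really just restricting processing to the first entry (with a guard against an empty list). A fragment, assuming the example's `getFaceData()` accessor:

```java
// Fragment for draw(), assuming KinectPV2's face-tracking accessor:
ArrayList<FaceData> faceData = kinect.getFaceData();
if (faceData.size() > 0) {
  FaceData face = faceData.get(0); // first detected face only
  // read the nose position from this single face and test it against
  // the grid, exactly as the example does inside its for-loop
}
```

The empty-list guard is also the usual fix for a NullPointerException-style crash when no face is in view.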

plz omg save me (null pointer exception)


I am making the Kinect Flow example from http://www.creativeapplications.net/processing/kinect-physics-tutorial-for-processing/. I have Processing 2.2.1 and, I believe, all the correct libraries installed. I corrected many of the issues the code has with my version of Processing, but it is giving me a java.lang.NullPointerException. Here is the code.

Main Class `

    import processing.opengl.*; // opengl
    import SimpleOpenNI.*; // kinect
    import blobDetection.*; // blobs
    import java.util.Collections; // a regular java import so we can use and extend the Polygon class (see PolygonBlob)
    import java.awt.Polygon;

    // declare SimpleOpenNI object
    SimpleOpenNI context;
    // declare BlobDetection object
    BlobDetection theBlobDetection;
    // declare custom PolygonBlob object (see class for more info)
    PolygonBlob poly = new PolygonBlob();

    // PImage to hold incoming imagery and smaller one for blob detection
    PImage cam, blobs;
    // the kinect's dimensions to be used later on for calculations
    int kinectWidth = 640;
    int kinectHeight = 480;
    // to center and rescale from 640x480 to higher custom resolutions
    float reScale;

    // background color
    color bgColor;
    // three color palettes (artifact from me storing many interesting color palettes as strings in an external data file ;-)
    String[] palettes = {
      "-1117720,-13683658,-8410437,-9998215,-1849945,-5517090,-4250587,-14178341,-5804972,-3498634",
      "-67879,-9633503,-8858441,-144382,-4996094,-16604779,-588031",
      "-16711663,-13888933,-9029017,-5213092,-1787063,-11375744,-2167516,-15713402,-5389468,-2064585"
    };

    // an array called flow of 2250 Particle objects (see Particle class)
    Particle[] flow = new Particle[2250];
    // global variables to influence the movement of all particles
    float globalX, globalY;

    void setup() {
      // it's possible to customize this, for example 1920x1080
      size(1280, 720, OPENGL);
      // initialize SimpleOpenNI object
      context = new SimpleOpenNI(this);
      if (!context.enableUser()) {
        // if context.enableScene() returns false
        // then the Kinect is not working correctly
        // make sure the green light is blinking
        println("Kinect not connected!");
        exit();
      } else {
        // mirror the image to be more intuitive
        context.setMirror(true);
        // calculate the reScale value
    // currently it's rescaled to fill the complete width (cuts off top/bottom)
        // it's also possible to fill the complete height (leaves empty sides)
        reScale = (float) width / kinectWidth;
        // create a smaller blob image for speed and efficiency
        blobs = createImage(kinectWidth/3, kinectHeight/3, RGB);
        // initialize blob detection object to the blob image dimensions
        theBlobDetection = new BlobDetection(blobs.width, blobs.height);
        theBlobDetection.setThreshold(0.2);
        setupFlowfield();
      }
    }

    void draw() {
      // fading background
      noStroke();
      fill(bgColor, 65);
      rect(0, 0, width, height);
      // update the SimpleOpenNI object
      context.update();
      // put the image into a PImage
      cam = context.userImage().get();
      // copy the image into the smaller blob image
      blobs.copy(cam, 0, 0, cam.width, cam.height, 0, 0, blobs.width, blobs.height);
      // blur the blob image
      blobs.filter(BLUR);
      // detect the blobs
      theBlobDetection.computeBlobs(blobs.pixels);
      // clear the polygon (original functionality)
      poly.reset();
      // create the polygon from the blobs (custom functionality, see class)
      poly.createPolygon();
      drawFlowfield();
    }

    void setupFlowfield() {
      // set stroke weight (for particle display) to 2.5
      strokeWeight(2.5);
      // initialize all particles in the flow
      for(int i=0; i<flow.length; i++) {
        flow[i] = new Particle(i/10000.0);
      }
      // set all colors randomly now
      setRandomColors(1);
    }

    void drawFlowfield() {
      // center and reScale from Kinect to custom dimensions
      translate(0, (height-kinectHeight*reScale)/2);
      scale(reScale);
      // set global variables that influence the particle flow's movement
      globalX = noise(frameCount * 0.01) * width/2 + width/4;
      globalY = noise(frameCount * 0.005 + 5) * height;
      // update and display all particles in the flow
      for (Particle p : flow) {
        p.updateAndDisplay();
      }
      // set the colors randomly every 240th frame
      setRandomColors(240);
    }

    // sets the colors every nth frame
    void setRandomColors(int nthFrame) {
      if (frameCount % nthFrame == 0) {
        // turn a palette into a series of strings
        String[] paletteStrings = split(palettes[int(random(palettes.length))], ",");
        // turn strings into colors
        color[] colorPalette = new color[paletteStrings.length];
        for (int i=0; i<paletteStrings.length; i++) {
          colorPalette[i] = int(paletteStrings[i]);
        }
        // set background color to first color from palette
        bgColor = colorPalette[0];
        // set all particle colors randomly to color from palette (excluding first aka background color)
        for (int i=0; i<flow.length; i++) {
          flow[i].col = colorPalette[int(random(1, colorPalette.length))];
        }
      }
    }
    `

Particle Class `

    // a basic noise-based moving particle
    class Particle {
      // unique id, (previous) position, speed
      float id, x, y, xp, yp, s, d;
      color col; // color

      Particle(float id) {
        this.id = id;
        s = random(2, 6); // speed
      }

      void updateAndDisplay() {
        // let it flow, end with a new x and y position
        id += 0.01;
        d = (noise(id, x/globalY, y/globalY)-0.5)*globalX;
        x += cos(radians(d))*s;
        y += sin(radians(d))*s;

        // constrain to boundaries
        if (x<-10) x=xp=kinectWidth+10; if (x>kinectWidth+10) x=xp=-10;
        if (y<-10) y=yp=kinectHeight+10; if (y>kinectHeight+10) y=yp=-10;

        // if there is a polygon (more than 0 points)
        if (poly.npoints > 0) {
          // if this particle is outside the polygon
          if (!poly.contains(x, y)) {
            // while it is outside the polygon
            while(!poly.contains(x, y)) {
              // randomize x and y
              x = random(kinectWidth);
              y = random(kinectHeight);
            }
            // set previous x and y, to this x and y
            xp=x;
            yp=y;
          }
        }

        // individual particle color
        stroke(col);
        // line from previous to current position
        line(xp, yp, x, y);

        // set previous to current position
        xp=x;
        yp=y;
      }
    }`

PolygonBlob Class `

    import java.util.Collections;

    // an extended polygon class with my own customized createPolygon() method (feel free to improve!)
    class PolygonBlob extends Polygon {

      // took me some time to make this method fully self-sufficient
      // now it works quite well in creating a correct polygon from a person's blob
      // of course many thanks to v3ga, because the library already does a lot of the work
      void createPolygon() {
        // an arrayList... of arrayLists... of PVectors
        // the arrayLists of PVectors are basically the person's contours (almost but not completely in a polygon-correct order)
        ArrayList<ArrayList> contours = new ArrayList<ArrayList>();
        // helpful variables to keep track of the selected contour and point (start/end point)
        int selectedContour = 0;
        int selectedPoint = 0;

        // create contours from blobs
        // go over all the detected blobs
        for (int n=0; n<theBlobDetection.getBlobNb(); n++) {
          Blob b = theBlobDetection.getBlob(n);
          // for each substantial blob...
          if (b != null && b.getEdgeNb() > 100) {
            // create a new contour arrayList of PVectors
            ArrayList contour = new ArrayList();
            // go over all the edges in the blob
            for (int m=0; m<b.getEdgeNb(); m++) {
              // get the edgeVertices of the edge
              EdgeVertex eA = b.getEdgeVertexA(m);
              EdgeVertex eB = b.getEdgeVertexB(m);
              // if both aren't null...
              if (eA != null && eB != null) {
                // get next and previous edgeVertex
                  EdgeVertex fn = b.getEdgeVertexA((m+1) % b.getEdgeNb());
                  EdgeVertex fp = b.getEdgeVertexA((max(0, m-1)));
                  // calculate distance between vertexA and next and previous edgeVertexA respectively
                  // positions are multiplied by kinect dimensions because the blob library returns normalized values
                  float dn = dist(eA.x*kinectWidth, eA.y*kinectHeight, fn.x*kinectWidth, fn.y*kinectHeight);
                  float dp = dist(eA.x*kinectWidth, eA.y*kinectHeight, fp.x*kinectWidth, fp.y*kinectHeight);
                  // if either distance is bigger than 15
                  if (dn > 15 || dp > 15) {
                  // if the current contour size is bigger than zero
                  if (contour.size() > 0) {
                    // add final point
                    contour.add(new PVector(eB.x*kinectWidth, eB.y*kinectHeight));
                    // add current contour to the arrayList
                    contours.add(contour);
                    // start a new contour arrayList
                    contour = new ArrayList();
                  // if the current contour size is 0 (aka it's a new list)
                  } else {
                    // add the point to the list
                    contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
                  }
                // if both distance are smaller than 15 (aka the points are close)
                } else {
                  // add the point to the list
                  contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
                }
              }
            }
          }
        }

        // at this point in the code we have a list of contours (aka an arrayList of arrayLists of PVectors)
        // now we need to sort those contours into a correct polygon. To do this we need two things:
        // 1. The correct order of contours
        // 2. The correct direction of each contour

        // as long as there are contours left...
        while (contours.size() > 0) {

          // find next contour
          float distance = 999999999;
          // if there are already points in the polygon
          if (npoints > 0) {
            // use the polygon's last point as a starting point
            PVector lastPoint = new PVector(xpoints[npoints-1], ypoints[npoints-1]);
            // go over all contours
            for (int i=0; i<contours.size(); i++) {
              ArrayList c = contours.get(i);
              // get the contour's first point
              PVector fp = (PVector) c.get(0);
              // get the contour's last point
              PVector lp = (PVector) c.get(c.size()-1);
              // if the distance between the current contour's first point and the polygon's last point is smaller than distance
              if (fp.dist(lastPoint) < distance) {
                // set distance to this distance
                distance = fp.dist(lastPoint);
                // set this as the selected contour
                selectedContour = i;
                // set selectedPoint to 0 (which signals first point)
                selectedPoint = 0;
              }
              // if the distance between the current contour's last point and the polygon's last point is smaller than distance
              if (lp.dist(lastPoint) < distance) {
                // set distance to this distance
                distance = lp.dist(lastPoint);
                // set this as the selected contour
                selectedContour = i;
                // set selectedPoint to 1 (which signals last point)
                selectedPoint = 1;
              }
            }
          // if the polygon is still empty
          } else {
            // use a starting point in the lower-right
            PVector closestPoint = new PVector(width, height);
            // go over all contours
            for (int i=0; i<contours.size(); i++) {
              ArrayList c = contours.get(i);
              // get the contour's first point
              PVector fp = (PVector) c.get(0);
              // get the contour's last point
              PVector lp = (PVector) c.get(c.size()-1);
              // if the first point is in the lowest 5 pixels of the (kinect) screen and more to the left than the current closestPoint
              if (fp.y > kinectHeight-5 && fp.x < closestPoint.x) {
                // set closestPoint to first point
                closestPoint = fp;
                // set this as the selected contour
                selectedContour = i;
                // set selectedPoint to 0 (which signals first point)
                selectedPoint = 0;
              }
              // if the last point is in the lowest 5 pixels of the (kinect) screen and more to the left than the current closestPoint
              if (lp.y > kinectHeight-5 && lp.x < closestPoint.x) {
                // set closestPoint to last point
                closestPoint = lp;
                // set this as the selected contour
                selectedContour = i;
                // set selectedPoint to 1 (which signals last point)
                selectedPoint = 1;
              }
            }
          }

          // add contour to polygon
          ArrayList<PVector> contour = contours.get(selectedContour);
          // if selectedPoint is bigger than zero (aka last point) then reverse the arrayList of points
          if (selectedPoint > 0) { Collections.reverse(contour); }
          // add all the points in the contour to the polygon
          for (PVector p : contour) {
            addPoint(int(p.x), int(p.y));
          }
          // remove this contour from the list of contours
          contours.remove(selectedContour);
          // the while loop above makes all of this code loop until the number of contours is zero
          // at that time all the points in all the contours have been added to the polygon... in the correct order (hopefully)
        }
      }
    }

`

Please someone help
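One hedged guess at the NullPointerException (the line number in the stack trace will confirm it): SimpleOpenNI can return null from userImage() before any user data has arrived, and copying from a null PImage throws exactly this exception. A defensive start of draw() would be:

```java
// Defensive start of draw() — returns instead of crashing while
// SimpleOpenNI has no user image yet.
void draw() {
  noStroke();
  fill(bgColor, 65);
  rect(0, 0, width, height);
  context.update();
  PImage userImg = context.userImage();
  if (userImg == null || userImg.width == 0) {
    return; // no user data this frame, try again on the next one
  }
  cam = userImg.get();
  blobs.copy(cam, 0, 0, cam.width, cam.height, 0, 0, blobs.width, blobs.height);
  // ... rest of draw() unchanged
}
```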

How can I make longer words appear smaller and shorter words bigger?


I want to increase the size of the letters when the silhouette approaches the Kinect and decrease it when the person steps back, as in this video: youtube.com/watch?v=h5a8UZCgs14. Thanks!
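A minimal sketch of the core idea, assuming SimpleOpenNI with enableUser(): take the tracked user's center of mass (its z component is the distance in millimeters) and map it to a text size. The 500–3000 mm range and 10–120 pt sizes are placeholder values to tune:

```java
// Inside draw(), after context.update():
int[] users = context.getUsers();
if (users.length > 0) {
  PVector com = new PVector();
  // center of mass of the first tracked user; com.z is distance in mm
  if (context.getCoM(users[0], com)) {
    float s = map(com.z, 500, 3000, 120, 10); // closer -> bigger letters
    textSize(constrain(s, 10, 120));
    textAlign(CENTER, CENTER);
    text("HELLO", width/2, height/2);
  }
}
```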

Is kinect dead?


As a latecomer to the Processing/Kinect game, I can't help noticing that Kinect seems kind of dead. Even the latest Kinect-related post on this forum is from September 2015 (except for this one ;-)). Information about it is fairly scattered, the drivers are super old, and blog posts about it are all from 2011/2012. So what do you think? Is this Kinect thing dead? Is everybody done playing with it? I mean, there's not really an alternative yet (as far as I can see). I'm curious about your thoughts. Kind regards.

When I export the application, the .exe file doesn't work


My sketch uses the Kinect device and works well in the editor, but when I export the application, the executable file doesn't work. Does anybody know why?

Reading and saving infrared and depth images in Kinect V2


Dear all, I need your help reading and saving infrared and depth images from the Kinect v2 using Processing 3.0. Can the range image be saved with a .PLY extension? And are there any recommended tutorials or documentation for using Processing 3.0 with the Kinect v2?

Thanks in advance for your help!
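Saving to .PLY is plain text writing, so it does not depend on the Kinect library. As a hedged illustration, the hypothetical helper below formats an array of (x, y, z) points — which you would first extract from the depth frame with your Kinect library of choice — into the ASCII PLY layout that MeshLab and similar tools can open:

```java
// Hypothetical helper: turns an array of (x, y, z) points into an
// ASCII .PLY string. Extracting the points from the Kinect v2 depth
// frame is a separate step that depends on your Kinect library.
public class PlyWriter {
  public static String buildPly(float[][] points) {
    StringBuilder sb = new StringBuilder();
    // header: format, vertex count, and the three float properties
    sb.append("ply\n");
    sb.append("format ascii 1.0\n");
    sb.append("element vertex ").append(points.length).append("\n");
    sb.append("property float x\n");
    sb.append("property float y\n");
    sb.append("property float z\n");
    sb.append("end_header\n");
    // one "x y z" line per vertex
    for (float[] p : points) {
      sb.append(p[0]).append(" ").append(p[1]).append(" ").append(p[2]).append("\n");
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // demo: a single point at (0, 0, 1.5)
    System.out.print(buildPly(new float[][] { {0f, 0f, 1.5f} }));
  }
}
```

In Processing you would then write the returned string to disk (e.g. with a PrintWriter from createWriter()) and feed it real point data instead of the demo array.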

kinect.getRawDepth();


I want to use kinect.getRawDepth(), but Processing says "The function does not exist." Can you help me?
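getRawDepth() belongs to the Open Kinect for Processing library, so the error usually means the sketch is using a different Kinect library (SimpleOpenNI, for instance, exposes the same data as context.depthMap() instead). A minimal sketch of the Open Kinect usage, assuming a Kinect v1:

```java
import org.openkinect.processing.*;

Kinect kinect; // Kinect v1; the v2 class in this library is Kinect2

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth(); // required before asking for depth data
}

void draw() {
  int[] depth = kinect.getRawDepth(); // one raw depth value per pixel
  // ... use the depth values
}
```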


How to open/close a robot arm gripper with Kinect?


Hi everyone,

I'm using a Kinect for my thesis. I can control 4 axes, but I can't control the fifth axis (the gripper).

How can I do that?

thanks.
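It is hard to be specific without knowing how the four axes are driven, but one common approach is to derive the gripper signal from a joint pair the other axes don't already use. A hedged fragment, assuming SimpleOpenNI skeleton tracking and a serial link to the arm (the 100–600 mm range and the serial port are placeholders):

```java
// Fragment for draw(), assuming userId is a tracked skeleton and
// port is an open processing.serial.Serial connection to the arm.
PVector leftHand = new PVector();
PVector rightHand = new PVector();
context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);
context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
// distance between the hands (millimeters) -> gripper angle 0..180
float d = constrain(leftHand.dist(rightHand), 100, 600);
int gripperAngle = int(map(d, 100, 600, 0, 180));
// send it alongside the four axis values, e.g.:
// port.write(gripperAngle);
```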

Trying Owed's KinectPhysics: cannot convert Object to KinectPhysics.CustomShape


http://www.creativeapplications.net/processing/kinect-physics-tutorial-for-processing/

I've already converted everything that I thought was necessary (enableScene to enableUser, etc.), so I can't figure out what else is wrong. Help would be greatly appreciated. Thanks!
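For what it's worth, this particular compile error usually points at a raw ArrayList: when polygons is declared without a type parameter, polygons.get(i) returns Object, which cannot be assigned to CustomShape without a cast. A minimal standalone demonstration of the fix (the CustomShape here is a stand-in, not the sketch's class):

```java
import java.util.ArrayList;

public class GenericsFix {
  static class CustomShape { } // stand-in for the sketch's class

  static int demo() {
    // With a raw ArrayList, get() returns Object, which won't assign to
    // CustomShape without a cast — the error in the title. With the
    // type parameter, get() returns CustomShape directly:
    ArrayList<CustomShape> polygons = new ArrayList<CustomShape>();
    polygons.add(new CustomShape());
    CustomShape cs = polygons.get(0); // no cast needed
    return polygons.size();
  }

  public static void main(String[] args) {
    System.out.println(demo()); // prints 1
  }
}
```

In the sketch, the corresponding declaration is `ArrayList<CustomShape> polygons = new ArrayList<CustomShape>();` (the enhanced for loop over polygons needs the type parameter too).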

    import shiffman.box2d.*;

    // Kinect Physics Example by Amnon Owed (15/09/12)

    // import libraries
    import processing.opengl.*; // opengl
    import SimpleOpenNI.*; // kinect
    import blobDetection.*; // blobs
    import toxi.geom.*; // toxiclibs shapes and vectors
    import toxi.processing.*; // toxiclibs display
    //import pbox2d.*; // shiffman's jbox2d helper library
    import org.jbox2d.collision.shapes.*; // jbox2d
    import org.jbox2d.common.*; // jbox2d
    import org.jbox2d.dynamics.*; // jbox2d

    // declare SimpleOpenNI object
    SimpleOpenNI context;
    // declare BlobDetection object
    BlobDetection theBlobDetection;
    // ToxiclibsSupport for displaying polygons
    ToxiclibsSupport gfx;
    // declare custom PolygonBlob object (see class for more info)
    PolygonBlob poly;

    // PImage to hold incoming imagery and smaller one for blob detection
    PImage cam, blobs;
    // the kinect's dimensions to be used later on for calculations
    int kinectWidth = 640;
    int kinectHeight = 480;
    // to center and rescale from 640x480 to higher custom resolutions
    float reScale;

    // background and blob color
    color bgColor, blobColor;
    // three color palettes (artifact from me storing many interesting color palettes as strings in an external data file ;-)
    String[] palettes = {
      "-1117720,-13683658,-8410437,-9998215,-1849945,-5517090,-4250587,-14178341,-5804972,-3498634",
      "-67879,-9633503,-8858441,-144382,-4996094,-16604779,-588031",
      "-1978728,-724510,-15131349,-13932461,-4741770,-9232823,-3195858,-8989771,-2850983,-10314372"
    };
    color[] colorPalette;

    // the main PBox2D object in which all the physics-based stuff is happening
    Box2DProcessing box2d;
    // list to hold all the custom shapes (circles, polygons)
    ArrayList<CustomShape> polygons = new ArrayList<CustomShape>();

    void setup() {
      // it's possible to customize this, for example 1920x1080
      size(1280, 720, OPENGL);

      context = new SimpleOpenNI(this);
      context.enableDepth(); //
      context.enableUser(); //

      // initialize SimpleOpenNI object
    //  if (!context.enableScene()) {
      if (!context.enableUser()) {
        // if context.enableScene() returns false
        // then the Kinect is not working correctly
        // make sure the green light is blinking
        println("Kinect not connected!");
        exit();
      } else {
        // mirror the image to be more intuitive

        context.setMirror(true);
        // calculate the reScale value
        // currently it's rescaled to fill the complete width (cuts off top/bottom)
        // it's also possible to fill the complete height (leaves empty sides)
        reScale = (float) width / kinectWidth;
        // create a smaller blob image for speed and efficiency
        blobs = createImage(kinectWidth/3, kinectHeight/3, RGB);
        // initialize blob detection object to the blob image dimensions
        theBlobDetection = new BlobDetection(blobs.width, blobs.height);
        theBlobDetection.setThreshold(0.2);
        // initialize ToxiclibsSupport object
        gfx = new ToxiclibsSupport(this);
        // setup box2d, create world, set gravity
        box2d = new Box2DProcessing(this);
        box2d.createWorld();
        box2d.setGravity(0, -20);
        // set random colors (background, blob)
        setRandomColors(1);
      }
    }

    void draw() {
      background(bgColor);
      // update the SimpleOpenNI object
      context.update();
      // put the image into a PImage
      cam = context.depthImage().get(); //

    //  cam = context.sceneImage().get();
      // copy the image into the smaller blob image
      blobs.copy(cam, 0, 0, cam.width, cam.height, 0, 0, blobs.width, blobs.height);
      // blur the blob image
      blobs.filter(BLUR, 1);
      // detect the blobs
      theBlobDetection.computeBlobs(blobs.pixels);
      // initialize a new polygon
      poly = new PolygonBlob();
      // create the polygon from the blobs (custom functionality, see class)
      poly.createPolygon();
      // create the box2d body from the polygon
      poly.createBody();
      // update and draw everything (see method)
      updateAndDrawBox2D();
      // destroy the person's body (important!)
      poly.destroyBody();
      // set the colors randomly every 240th frame
      setRandomColors(240);
    }

    void updateAndDrawBox2D() {
      // if frameRate is sufficient, add a polygon and a circle with a random radius
      if (frameRate > 29) {
        polygons.add(new CustomShape(kinectWidth/2, -50, -1));
        polygons.add(new CustomShape(kinectWidth/2, -50, random(2.5, 20)));
      }
      // take one step in the box2d physics world
      box2d.step();

      // center and reScale from Kinect to custom dimensions
      translate(0, (height-kinectHeight*reScale)/2);
      scale(reScale);

      // display the person's polygon
      noStroke();
      fill(blobColor);
      gfx.polygon2D(poly);

      // display all the shapes (circles, polygons)
      // go backwards to allow removal of shapes
      for (int i=polygons.size()-1; i>=0; i--) {
        CustomShape cs = polygons.get(i);
        // if the shape is off-screen remove it (see class for more info)
        if (cs.done()) {
          polygons.remove(i);
        // otherwise update (keep shape outside person) and display (circle or polygon)
        } else {
          cs.update();
          cs.display();
        }
      }
    }

    // sets the colors every nth frame
    void setRandomColors(int nthFrame) {
      if (frameCount % nthFrame == 0) {
        // turn a palette into a series of strings
        String[] paletteStrings = split(palettes[int(random(palettes.length))], ",");
        // turn strings into colors
        colorPalette = new color[paletteStrings.length];
        for (int i=0; i<paletteStrings.length; i++) {
          colorPalette[i] = int(paletteStrings[i]);
        }
        // set background color to first color from palette
        bgColor = colorPalette[0];
        // set blob color to second color from palette
        blobColor = colorPalette[1];
        // set all shape colors randomly
        for (CustomShape cs: polygons) { cs.col = getRandomColor(); }
      }
    }

    // returns a random color from the palette (excluding first aka background color)
    color getRandomColor() {
      return colorPalette[int(random(1, colorPalette.length))];
    }

`

`
// usually one would probably make a generic Shape class and subclass different types (circle, polygon), but that
// would mean at least 3 instead of 1 class, so for this tutorial it's a combi-class CustomShape for all types of shapes
// to save some space and keep the code as concise as possible I took a few shortcuts to prevent repeating the same code
class CustomShape {
  // to hold the box2d body
  Body body;
  // to hold the Toxiclibs polygon shape
  Polygon2D toxiPoly;
  // custom color for each shape
  color col;
  // radius (also used to distinguish between circles and polygons in this combi-class)
  float r;

  CustomShape(float x, float y, float r) {
    this.r = r;
    // create a body (polygon or circle based on the r)
    makeBody(x, y);
    // get a random color
    col = getRandomColor();
  }

  void makeBody(float x, float y) {
    // define a dynamic body positioned at xy in box2d world coordinates,
    // create it and set the initial values for this box2d body's speed and angle
    BodyDef bd = new BodyDef();
    bd.type = BodyType.DYNAMIC;
    bd.position.set(box2d.coordPixelsToWorld(new Vec2(x, y)));
    body = box2d.createBody(bd);
    body.setLinearVelocity(new Vec2(random(-8, 8), random(2, 8)));
    body.setAngularVelocity(random(-5, 5));

    // depending on the r this combi-code creates either a box2d polygon or a circle
    if (r == -1) {
      // box2d polygon shape
      PolygonShape sd = new PolygonShape();
      // toxiclibs polygon creator (triangle, square, etc)
      toxiPoly = new Circle(random(5, 20)).toPolygon2D(int(random(3, 6)));
      // place the toxiclibs polygon's vertices into a vec2d array
      Vec2[] vertices = new Vec2[toxiPoly.getNumPoints()];
      for (int i=0; i<vertices.length; i++) {
        Vec2D v = toxiPoly.vertices.get(i);
        vertices[i] = box2d.vectorPixelsToWorld(new Vec2(v.x, v.y));
      }
      // put the vertices into the box2d shape
      sd.set(vertices, vertices.length);
      // create the fixture from the shape (deflect things based on the actual polygon shape)
      body.createFixture(sd, 1);
    } else {
      // box2d circle shape of radius r
      CircleShape cs = new CircleShape();
      cs.m_radius = box2d.scalarPixelsToWorld(r);
      // tweak the circle's fixture def a little bit
      FixtureDef fd = new FixtureDef();
      fd.shape = cs;
      fd.density = 1;
      fd.friction = 0.01;
      fd.restitution = 0.3;
      // create the fixture from the shape's fixture def (deflect things based on the actual circle shape)
      body.createFixture(fd);
    }
  }

  // method to loosely move shapes outside a person's polygon
  // (alternatively you could allow or remove shapes inside a person's polygon)
  void update() {
    // get the screen position from this shape (circle of polygon)
    Vec2 posScreen = box2d.getBodyPixelCoord(body);
    // turn it into a toxiclibs Vec2D
    Vec2D toxiScreen = new Vec2D(posScreen.x, posScreen.y);
    // check if this shape's position is inside the person's polygon
    boolean inBody = poly.containsPoint(toxiScreen);
    // if a shape is inside the person
    if (inBody) {
      // find the closest point on the polygon to the current position
      Vec2D closestPoint = toxiScreen;
      float closestDistance = 9999999;
      for (Vec2D v : poly.vertices) {
        float distance = v.distanceTo(toxiScreen);
        if (distance < closestDistance) {
          closestDistance = distance;
          closestPoint = v;
        }
      }
      // create a box2d position from the closest point on the polygon
      Vec2 contourPos = new Vec2(closestPoint.x, closestPoint.y);
      Vec2 posWorld = box2d.coordPixelsToWorld(contourPos);
      float angle = body.getAngle();
      // set the box2d body's position of this CustomShape to the new position (use the current angle)
      body.setTransform(posWorld, angle);
    }
  }

  // display the customShape
  void display() {
    // get the pixel coordinates of the body
    Vec2 pos = box2d.getBodyPixelCoord(body);
    pushMatrix();
    // translate to the position
    translate(pos.x, pos.y);
    noStroke();
    // use the shape's custom color
    fill(col);
    // depending on the r this combi-code displays either a polygon or a circle
    if (r == -1) {
      // rotate by the body's angle
      float a = body.getAngle();
      rotate(-a); // minus!
      gfx.polygon2D(toxiPoly);
    } else {
      ellipse(0, 0, r*2, r*2);
    }
    popMatrix();
  }

  // if the shape moves off-screen, destroy the box2d body (important!)
  // and return true (which will lead to the removal of this CustomShape object)
  boolean done() {
    Vec2 posScreen = box2d.getBodyPixelCoord(body);
    boolean offscreen = posScreen.y > height;
    if (offscreen) {
      box2d.destroyBody(body);
      return true;
    }
    return false;
  }
}

// an extended polygon class quite similar to the earlier PolygonBlob class (but extending Toxiclibs' Polygon2D class instead)
// The main difference is that this one is able to create (and destroy) a box2d body from it's own shape
class PolygonBlob extends Polygon2D {
  // to hold the box2d body
  Body body;

  // the createPolygon() method is nearly identical to the one presented earlier
  // see the Kinect Flow Example for a more detailed description of this method (again, feel free to improve it)
  void createPolygon() {
    ArrayList<ArrayList> contours = new ArrayList<ArrayList>();
    int selectedContour = 0;
    int selectedPoint = 0;

    // create contours from blobs
    for (int n=0; n<theBlobDetection.getBlobNb(); n++) {
      Blob b = theBlobDetection.getBlob(n);
      if (b != null && b.getEdgeNb() > 100) {
        ArrayList contour = new ArrayList();
        for (int m=0; m<b.getEdgeNb(); m++) {
          EdgeVertex eA = b.getEdgeVertexA(m);
          EdgeVertex eB = b.getEdgeVertexB(m);
          if (eA != null && eB != null) {
            EdgeVertex fn = b.getEdgeVertexA((m+1) % b.getEdgeNb());
            EdgeVertex fp = b.getEdgeVertexA((max(0, m-1)));
            float dn = dist(eA.x*kinectWidth, eA.y*kinectHeight, fn.x*kinectWidth, fn.y*kinectHeight);
            float dp = dist(eA.x*kinectWidth, eA.y*kinectHeight, fp.x*kinectWidth, fp.y*kinectHeight);
            if (dn > 15 || dp > 15) {
              if (contour.size() > 0) {
                contour.add(new PVector(eB.x*kinectWidth, eB.y*kinectHeight));
                contours.add(contour);
                contour = new ArrayList();
              } else {
                contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
              }
            } else {
              contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
            }
          }
        }
      }
    }

    while (contours.size() > 0) {

      // find next contour
      float distance = 999999999;
      if (getNumPoints() > 0) {
        Vec2D vecLastPoint = vertices.get(getNumPoints()-1);
        PVector lastPoint = new PVector(vecLastPoint.x, vecLastPoint.y);
        for (int i=0; i<contours.size(); i++) {
          ArrayList<PVector> c = contours.get(i);
          PVector fp = c.get(0);
          PVector lp = c.get(c.size()-1);
          if (fp.dist(lastPoint) < distance) {
            distance = fp.dist(lastPoint);
            selectedContour = i;
            selectedPoint = 0;
          }
          if (lp.dist(lastPoint) < distance) {
            distance = lp.dist(lastPoint);
            selectedContour = i;
            selectedPoint = 1;
          }
        }
      } else {
        PVector closestPoint = new PVector(width, height);
        for (int i=0; i<contours.size(); i++) {
          ArrayList<PVector> c = contours.get(i);
          PVector fp = c.get(0);
          PVector lp = c.get(c.size()-1);
          if (fp.y > kinectHeight-5 && fp.x < closestPoint.x) {
            closestPoint = fp;
            selectedContour = i;
            selectedPoint = 0;
          }
          if (lp.y > kinectHeight-5 && lp.x < closestPoint.x) {
            closestPoint = lp;
            selectedContour = i;
            selectedPoint = 1;
          }
        }
      }

      // add contour to polygon
      ArrayList<PVector> contour = contours.get(selectedContour);
      if (selectedPoint > 0) { Collections.reverse(contour); }
      for (PVector p : contour) {
        add(new Vec2D(p.x, p.y));
      }
      contours.remove(selectedContour);
    }
  }

  // creates a shape-deflecting physics chain in the box2d world from this polygon
  void createBody() {
    // for stability the body is always created (and later destroyed)
    BodyDef bd = new BodyDef();
    body = box2d.createBody(bd);
    // if there are more than 0 points (aka a person on screen)...
    if (getNumPoints() > 0) {
      // create a vec2d array of vertices in box2d world coordinates from this polygon
      Vec2[] verts = new Vec2[getNumPoints()];
      for (int i=0; i<getNumPoints(); i++) {
        Vec2D v = vertices.get(i);
        verts[i] = box2d.coordPixelsToWorld(v.x, v.y);
      }
      // create a chain from the array of vertices
      ChainShape chain = new ChainShape();
      chain.createChain(verts, verts.length);
      // create fixture in body from the chain (this makes it actually deflect other shapes)
      body.createFixture(chain, 1);
    }
  }

  // destroy the box2d body (important!)
  void destroyBody() {
    box2d.destroyBody(body);
  }
}
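The stitching loop in createPolygon() above repeatedly appends whichever remaining contour has an endpoint (first or last point) closest to the polygon's last vertex. That selection rule can be isolated and checked in plain Java; the float[][] layout below is an illustrative stand-in for the sketch's ArrayList<PVector> contours:

```java
import java.util.ArrayList;
import java.util.List;

public class NearestEndpoint {
    // Returns {contourIndex, endpointIndex}, where endpointIndex is
    // 0 for the contour's first point and 1 for its last point --
    // the same rule the while-loop in createPolygon() applies.
    static int[] pickNearest(List<float[][]> contours, float[] lastPoint) {
        float best = Float.MAX_VALUE;
        int[] pick = {-1, -1};
        for (int i = 0; i < contours.size(); i++) {
            float[][] c = contours.get(i);
            float dFirst = dist(c[0], lastPoint);
            float dLast  = dist(c[c.length - 1], lastPoint);
            if (dFirst < best) { best = dFirst; pick = new int[]{i, 0}; }
            if (dLast  < best) { best = dLast;  pick = new int[]{i, 1}; }
        }
        return pick;
    }

    static float dist(float[] a, float[] b) {
        float dx = a[0] - b[0], dy = a[1] - b[1];
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        List<float[][]> contours = new ArrayList<>();
        contours.add(new float[][]{{0, 0}, {10, 0}});     // contour 0
        contours.add(new float[][]{{100, 0}, {11.5f, 0}}); // contour 1
        // Last vertex at (11, 0): contour 1's last point is nearest.
        int[] pick = pickNearest(contours, new float[]{11, 0});
        System.out.println(pick[0] + "," + pick[1]); // prints 1,1
    }
}
```

When an endpoint with index 1 is picked, the sketch reverses the contour before appending it, so the polygon always grows from its nearest open end.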

figure tracking 2d for kinect v1


Hello! I need the code for 2D figure tracking with a Kinect v1. Can someone help me?

Integrate these two codes?


Hello Processing Forum folks,

I am working on a project that uses 3 webcams that when looked at, will play a video. I am thinking of the screens as entities that need to be acknowledged before they communicate with someone.

Everything was going dandy until I ran into two hiccups.

One is that opencv.loadImage(camright); seems to be the culprit behind the error "width(0) and height(0) cannot be <= 0", which doesn't make sense to me because opencv.loadImage(camleft); and opencv.loadImage(camcenter); come before it and don't raise the same issue.

The second hiccup is that I am trying to use Keystone so that I can projection-map these videos onto hanging plexi... I just can't seem to figure out how to get the videos onto the Keystone 'mesh'. Did that make sense?

Anyways, I am very green (taking my first class) and any help would be appreciated immensely. Especially since this project is due next week...

Below is my code... (I really couldn't get the code formatting to work; I tried, I am sorry; if you want to help me with that too, that'd be nice)

//LIBRARIES
import deadpixel.keystone.*;
import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import java.awt.Rectangle;

//keystone stuff
Keystone ks;
CornerPinSurface surfaceleft;
CornerPinSurface surfacecenter;
CornerPinSurface surfaceright;
PGraphics offscreenleft;
PGraphics offscreencenter;
PGraphics offscreenright;

// movie object to play and pause later
//there will be three videos playing...
Movie myMovieleft;
Movie myMoviecenter;
Movie myMovieright;

OpenCV opencv;

//https://processing.org/discourse/beta/num_1221233526.html
//https://forum.processing.org/two/discussion/5960/capturing-feeds-from-multiple-webcams
Capture camleft;
Capture camcenter;
Capture camright;

String[] captureDevices;

void setup() {
  //this will println listing the webcams you need to but the number on the left in the []
  //to make them work
  printArray(Capture.list());
  size(2640, 1080, P3D); //this should be large enough to house all the videos
  background(0);
  opencv = new OpenCV(this, 160, 120);

  //keystone stuff
  ks = new Keystone(this);
  //this can change
  surfaceleft = ks.createCornerPinSurface(400, 300, 20);
  surfacecenter = ks.createCornerPinSurface(400, 300, 20);
  surfaceright = ks.createCornerPinSurface(400, 300, 20);
  // We need an offscreen buffer to draw the surface we
  // want projected
  // note that we're matching the resolution of the
  // CornerPinSurface.
  // (The offscreen buffer can be P2D or P3D)
  //P3D is telling processing to be in 3D mode
  //the number is 400, 300 is related to eachother they must be the same
  offscreenleft = createGraphics(400, 300, P3D);
  offscreencenter = createGraphics(400, 300, P3D);
  offscreenright = createGraphics(400, 300, P3D);

  //webcam stuff
  //this is turning the webcam on and to run
  //the numbers correlate to the println list
  camleft = new Capture(this, Capture.list()[3] ); //LEFT CAM IS LOGITECH HD
  //WEBCAM C310
  camleft.start();

  camcenter = new Capture(this, Capture.list()[79] ); //CENTER CAM IS LOGITECH HD
  //WEBCAM C310-1
  camcenter.start();

  camright = new Capture(this, Capture.list()[155] ); //RIGHT CAM IS LOGITECH HD
  //WEBCAM C310-2
  camright.start();

  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  // movie stuff
  // load video
  myMovieleft = new Movie(this, "testvideo.mp4");

  // need to play, pause, loop
  myMovieleft.play();
  myMovieleft.pause();
  myMovieleft.loop();

  myMoviecenter = new Movie(this, "testvideo-1.mp4");

  // need to play, pause, loop
  myMoviecenter.play();
  myMoviecenter.pause();
  myMoviecenter.loop();

  myMovieright = new Movie(this, "testvideo-2.mp4");

  // need to play, pause, loop
  myMovieright.play();
  myMovieright.pause();
  myMovieright.loop();
}
void captureEvent(Capture cam) {
  cam.read();
}

void draw() {

  // open cv detect faces
  opencv.loadImage(camleft);

  // load in faces as rectangles
  Rectangle[] facesleft = opencv.detect();

  // are there faces?
  if (facesleft.length > 0) {
    // sees a face!
    myMovieleft.play();
  } else {
    // no face
    myMovieleft.pause();
  }

  // play video
  if (myMovieleft.available()) {
    myMovieleft.read();
    //offscreen.image
    image(myMovieleft, 0, 540);
  }

  opencv.loadImage(camcenter);

  // load in faces as rectangles
  Rectangle[] facescenter = opencv.detect();

  // are there faces?
  if (facescenter.length > 0) {
    // sees a face!
    myMoviecenter.play();
  } else {
    // no face
    myMoviecenter.pause();
  }

  // play video
  if (myMoviecenter.available()) {
    myMoviecenter.read();
    image(myMoviecenter, 780, height/2);
  }

// THERE IS SOMETHING WRONG WITH THIS opencv.loadImage(camright);

//RIGHT CAM
  // load in faces as rectangles
  Rectangle[] facesright = opencv.detect();

  // are there faces?
  if (facesright.length > 0) {
    // sees a face!
    myMovieright.play();
  } else {
    // no face
    myMovieright.pause();
  }

  // play video
  if (myMovieright.available()) {
    myMovieright.read();
    image(myMovieright, 1560, height/2);

    //keystone stuff
    surfaceleft.render(offscreenleft);
    surfacecenter.render(offscreencenter);
    surfaceright.render(offscreenright);
  }
}
//the save and load function for keystone
void keyPressed() {
  switch(key) {
  case 'c':
    // enter/leave calibration mode, where surfaces can be warped
    // and moved
    ks.toggleCalibration();
    break;

  case 'l':
    // loads the saved layout
    ks.load();
    break;

  case 's':
    // saves the layout
    ks.save();
    break;
  }
}
`
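A likely cause of the "width(0) and height(0)" error above is that the third capture hasn't delivered a frame yet the first time draw() runs, so camright still reports 0x0 when it is handed to OpenCV. A guard that skips detection until the frame has positive dimensions avoids this; the check itself can be isolated in plain Java (the names here are illustrative, not the Capture or OpenCV API):

```java
public class FrameGuard {
    // A capture's width/height stay 0 until its first frame arrives;
    // only hand a frame to detection once both are positive.
    static boolean frameReady(int width, int height) {
        return width > 0 && height > 0;
    }

    public static void main(String[] args) {
        System.out.println(frameReady(0, 0));     // prints false (no frame yet)
        System.out.println(frameReady(640, 480)); // prints true
    }
}
```

In the sketch, that would mean wrapping each loadImage/detect pair in a check on the corresponding camera's width and height, so a camera that initializes slowly simply skips a few frames instead of crashing the sketch.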

Video Inside Silhouette


I am trying to use this Git repo.

I am getting the error "data/config.json does not exist or could not be read". How do I create the JSON file, or load one?

Kinect v2 Processing library for Windows


Skeleton tracking stopped working after Processing 3.0 b4; when it recognizes a person, the screen changes color.

PBox 2D


Hello! I need the PBox2D library (pbox2d-master) to work with the Kinect. Everything I tried did not install correctly, and the sketch always says the library in question is missing. Can someone help me?


"SimpleOpenNI.SKEL_PROFILE_ALL"


I have run into a problem while trying to generate skeleton tracking. I get an error message: cannot find anything named "SimpleOpenNI.SKEL_PROFILE_ALL", etc. I have imported SimpleOpenNI, but it didn't work.

I have SimpleOpenNI 1.96, Kinect v1 SDK 1.8, and Processing 2.2.1.

KINECT-Playing a movie inside user's silhouette


Hello Processing Community,

I have created a sketch with the Kinect in which the background is black and a simple image is displayed inside the user's silhouette. My question is: is it possible to play a movie inside the user's silhouette? If yes, please give me your useful suggestions...

I'm trying my best with this stuff, but if I get any help, it would be great....

regards...
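Playing a movie inside the silhouette works the same way as the image version: per pixel, show the current movie frame wherever the Kinect's user map marks a user pixel, and keep the background everywhere else. A minimal sketch of that per-pixel step in plain Java (the arrays are illustrative stand-ins for SimpleOpenNI's userMap() and a movie frame's pixels):

```java
import java.util.Arrays;

public class SilhouetteMask {
    // Copies movie pixels wherever the user map is nonzero;
    // everywhere else the output stays the background color.
    static int[] maskMovie(int[] userMap, int[] moviePixels, int background) {
        int[] out = new int[userMap.length];
        for (int i = 0; i < userMap.length; i++) {
            out[i] = (userMap[i] != 0) ? moviePixels[i] : background;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] userMap = {0, 1, 1, 0};     // 1 = pixel belongs to the user
        int[] movie   = {10, 20, 30, 40}; // fake movie frame pixels
        int[] out = maskMovie(userMap, movie, 0);
        System.out.println(Arrays.toString(out)); // prints [0, 20, 30, 0]
    }
}
```

In the sketch itself, the only change from the image version is that the source pixel array comes from the Movie object's current frame (refreshed on each movieEvent/read) instead of a static PImage.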

I would like to assign the mouse function in skeletonMaskDepth for kinect v2


Hello! I would like to assign mouse functions (drag and hover) to the hand position in skeletonMaskDepth for Kinect v2. Can someone help me?
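One way to approach this: hover is just a point-in-rectangle test on the mapped hand joint, and drag can be a small state machine driven by a depth (push) threshold. A sketch of that logic in plain Java; the thresholds and names are assumptions, not the KinectPV2 API:

```java
public class HandMouse {
    // Hover: is the mapped hand position inside a screen rectangle?
    static boolean over(float x, float y,
                        float rx, float ry, float rw, float rh) {
        return x >= rx && x < rx + rw && y >= ry && y < ry + rh;
    }

    boolean dragging = false;

    // Call once per frame with the hand's depth (mm). Drag engages when
    // the hand pushes closer than pressDepth and releases only once it
    // pulls back past releaseDepth (hysteresis avoids flicker).
    void update(float handDepth, float pressDepth, float releaseDepth) {
        if (!dragging && handDepth < pressDepth) {
            dragging = true;
        } else if (dragging && handDepth > releaseDepth) {
            dragging = false;
        }
    }

    public static void main(String[] args) {
        System.out.println(over(50, 50, 0, 0, 100, 100)); // prints true
        HandMouse m = new HandMouse();
        m.update(900, 1000, 1100);  // hand pushed in -> drag starts
        System.out.println(m.dragging); // prints true
        m.update(1200, 1000, 1100); // hand pulled back -> drag ends
        System.out.println(m.dragging); // prints false
    }
}
```

Using two thresholds instead of one is deliberate: a single cutoff makes the drag state chatter when the hand hovers right at the boundary.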

Control the Bit Rate on Live Video export from a webcam


Hiya everyone!

I am currently working on an interactive art project that requires a video feed to go from a high-fidelity feed to a lossy feed and then back to the high-fidelity feed. Basically I want to map depth values to compression rates using a Kinect. However, I am a little lost on how best to achieve control over video output compression rates. Any ideas?

Thanks!!!
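One common route is to compute a quality parameter from the measured depth with a clamped linear remap (the same arithmetic as Processing's map() plus constrain()) and feed that number to whatever encoder or compression stage you use. The depth and quality ranges below are illustrative assumptions:

```java
public class DepthToQuality {
    // Linear remap of v from [inLo, inHi] to [outLo, outHi], clamped to
    // the output range -- Processing's map() combined with constrain().
    static float mapClamped(float v, float inLo, float inHi,
                            float outLo, float outHi) {
        float t = (v - inLo) / (inHi - inLo);
        t = Math.max(0f, Math.min(1f, t));
        return outLo + t * (outHi - outLo);
    }

    public static void main(String[] args) {
        // Near viewers (500 mm) get high quality, far ones (4000 mm) low.
        System.out.println(mapClamped(500,  500, 4000, 100, 10)); // prints 100.0
        System.out.println(mapClamped(4000, 500, 4000, 100, 10)); // prints 10.0
        // Out-of-range depths are clamped instead of overshooting.
        System.out.println(mapClamped(6000, 500, 4000, 100, 10)); // prints 10.0
    }
}
```

The clamp matters with a Kinect: depth readings outside the sensor's reliable range (or a 0 for "no reading") would otherwise drive the quality parameter outside what the encoder accepts.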

kinect error. how to fix it?


What's up?

I'm trying to do a project using Kinect v1 (model 1414). This error keeps showing:

There are no kinects, returning null

The thing is, sometimes it works and other times it doesn't. Any idea why this happens? Has it happened to you? Is there a way to fix it?

This is really urgent. Thanks guys.
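When a device intermittently enumerates as missing, a retry loop at startup often helps: attempt the open a few times before giving up, instead of crashing on the first null. The pattern in plain Java (the Supplier stands in for whatever call actually opens the Kinect):

```java
import java.util.function.Supplier;

public class DeviceRetry {
    // Tries the supplier up to maxAttempts times; returns the first
    // non-null result, or null if every attempt fails.
    static <T> T openWithRetry(Supplier<T> open, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            T device = open.get();
            if (device != null) {
                return device;
            }
            // In a real sketch you might sleep briefly between attempts
            // to give the USB stack time to enumerate the device.
        }
        return null;
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        // Simulated device that only appears on the third attempt.
        String device = openWithRetry(
            () -> ++calls[0] < 3 ? null : "kinect-1414", 5);
        System.out.println(device); // prints kinect-1414
    }
}
```

If the retries still fail, the usual hardware suspects for the model 1414 apply: the USB port (prefer a powered port directly on the machine, not a hub) and the external power adapter.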
