Channel: Kinect - Processing 2.x and 3.x Forum

Help making an animation inside user silhouette


Hello everyone. I need your help filling a user's silhouette with moving balls that are confined to the silhouette's outline. I have tried several different approaches and all of them failed. I would REALLY appreciate help with this. I'm using the Kinect v2 library. Thanks.
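A sketch of one possible approach, assuming the KinectPV2 body-track mask (getRawBodyTrack(), where 255 means "no user at this pixel"): move each ball one step per frame and only accept the move if the destination pixel is still inside the silhouette; otherwise bounce. Ball count and start positions are illustrative.

import KinectPV2.*;

KinectPV2 kinect;
ArrayList<PVector> balls = new ArrayList<PVector>();
ArrayList<PVector> vels = new ArrayList<PVector>();

void setup() {
  size(512, 424); // matches the v2 depth/body-track resolution
  kinect = new KinectPV2(this);
  kinect.enableBodyTrackImg(true);
  kinect.init();
  for (int i = 0; i < 200; i++) {
    // balls start at the centre; they come alive once the silhouette covers them
    balls.add(new PVector(width/2, height/2));
    vels.add(PVector.random2D());
  }
}

void draw() {
  background(0);
  image(kinect.getBodyTrackImage(), 0, 0);
  int[] mask = kinect.getRawBodyTrack(); // 255 = no user at this pixel
  noStroke();
  fill(255, 0, 0);
  for (int i = 0; i < balls.size(); i++) {
    PVector p = balls.get(i);
    PVector v = vels.get(i);
    int nx = constrain(int(p.x + v.x), 0, width - 1);
    int ny = constrain(int(p.y + v.y), 0, height - 1);
    if (mask[nx + ny * width] != 255) {
      p.set(nx, ny);                 // destination is inside the silhouette: move
    } else {
      v.mult(-1);                    // destination is outside: bounce back
      v.rotate(random(-0.5, 0.5));   // a little jitter so balls don't get stuck
    }
    ellipse(p.x, p.y, 6, 6);
  }
}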


Kinect 1520 Mac Sierra - No Camera Device


Hello! I know this gets asked a lot; since I'm new to Processing and the whole troubleshooting thing, I need your help. (Screenshot: Screen Shot 2017-01-25 at 11.50.48 PM)

Please help me out. I tried once, then removed the libfreenect2 folder and installed it again. Still not working. Thanks

How do I install the OpenKinect library on Windows?


I'm really struggling to figure out how to install Daniel Shiffman's OpenKinect library. I have installed the library itself through Processing (3), and have updated the drivers for the Kinect to libusbK as instructed on the GitHub page. Is there anything else I need to do, or is there something else wrong? When I run an example, I get this error:

A library relies on native code that's not available. Or only works properly when the sketch is run as a 64-bit application.

Can anybody help? I'm finding the instructions online all a bit confusing, and in Shiffman's video tutorial he doesn't mention anything other than downloading the library through the Processing software.

(I have a v1 Kinect)

Multi-Point Interaction


Where might I find a working library or reference material for Processing 3.x to help with multi-point/touch for an interactive floor? I am currently using skeleton tracking from the KinectPV2 library by Thomas Sanchez Lengeling. The skeleton tracker supports up to six users, but I am having trouble getting it to treat each of them as their own "mouse". Also, when prototyping on a multi-touch monitor I can only get one touch working; as soon as I use multiple touches, my particles mostly stop reacting. Any help and/or direction would be much appreciated. Thank you.
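For the "one cursor per user" part, here is a minimal sketch of the idea, assuming the skeleton API that KinectPV2's bundled examples use (getSkeletonDepthMap(), KSkeleton, KJoint; check the names against your library version): each tracked skeleton's hand joint becomes an independent pointer that can drive the particles instead of mouseX/mouseY.

import KinectPV2.*;

KinectPV2 kinect;

void setup() {
  size(512, 424);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.enableSkeletonDepthMap(true);
  kinect.init();
}

void draw() {
  background(0);
  image(kinect.getDepthImage(), 0, 0);
  ArrayList<KSkeleton> skeletons = kinect.getSkeletonDepthMap();
  for (int i = 0; i < skeletons.size(); i++) {
    KSkeleton sk = skeletons.get(i);
    if (!sk.isTracked()) continue;
    KJoint[] joints = sk.getJoints();
    KJoint hand = joints[KinectPV2.JointType_HandRight];
    // one independent "cursor" per tracked user
    noStroke();
    fill(255, 0, 0);
    ellipse(hand.getX(), hand.getY(), 20, 20);
  }
}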

Problem with Kinect and macOS 10.12 Sierra: Isochronous transfer error: 1


Hi,

I made some programs for the Kinect in the recent past and all was going well. However, since I upgraded to Sierra, the Kinect does not work well anymore. When activating the video and depth streams at the same time, the DEPTH stream skips a lot of frames and I get many of the following errors:

...
Isochronous transfer error: 1
[Stream 70] Expected 1132 data bytes, but got 280
[Stream 70] Expected 1132 data bytes, but got 892
Isochronous transfer error: 1
Isochronous transfer error: 1
Isochronous transfer error: 1
...

I've also tried the included example RGBDepthTest (in Contributed Libraries/Open Kinect for Processing/Kinect_v1). It gives the same error. The same problem occurs when running the Kinect example from openFrameworks.

Does anybody have tips to solve this problem?

Cheers, Blake

Map Kinect to z-coords


I am trying to map the Kinect v2 x, y, z coordinates for an interactive floor. Naturally the x and y coordinates were no problem, but I am having a tough time wrapping my mind around mapping the z coordinate. My goal is for my particles to follow the user around in 3D space. Any suggestions? Thank you.

P.S. I am using the SkeletonDepthMask example from the KinectPV2 library; it exposes the z coordinate, but like I said I can't figure out a way to use it.
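For what it's worth, a standalone sketch of the mapping idea: treat z as just another number and map() it into whatever range drives the particles. Here mouseY stands in for the Kinect z value, and the ranges are illustrative assumptions (raw v2 depth is roughly 500-4500 mm).

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  // fake depth: mouseY stands in for a joint's z value in mm
  float z = map(mouseY, 0, height, 500, 4500);
  // nearer (small z) -> larger; farther (large z) -> smaller
  float s = map(z, 500, 4500, 80, 10);
  fill(255);
  noStroke();
  ellipse(width/2, height/2, s, s);
}

The same map() call applied to a joint's z can scale particle size, speed, or attraction radius so the swarm appears to sit at the user's depth.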

[Resolved] Hand tracking with Kinect/Processing


Hi,

I am trying to make a project with Processing and the Kinect. I have already installed the right libraries (I use OpenNI and FingerTracker) and everything seems to work. I followed a tutorial which showed how to make the Kinect detect hands, especially fingers. It's this one:

import fingertracker.*;
import SimpleOpenNI.*;

FingerTracker fingers;
SimpleOpenNI kinect;
int threshold = 625;

void setup() {
  size(640, 480);


  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.setMirror(true);

  fingers = new FingerTracker(this, 640, 480);
  fingers.setMeltFactor(100);
}

void draw() {

  kinect.update();
  PImage depthImage = kinect.depthImage();
  image(depthImage, 0, 0);


  fingers.setThreshold(threshold);


  int[] depthMap = kinect.depthMap();
  fingers.update(depthMap);


  stroke(0,255,0);
  for (int k = 0; k < fingers.getNumContours(); k++) {
    fingers.drawContour(k);
  }

  // iterate over all the fingers found
  // and draw them as a red circle
  noStroke();
  fill(255,0,0);
  for (int i = 0; i < fingers.getNumFingers(); i++) {
    PVector position = fingers.getFinger(i);
    ellipse(position.x - 5, position.y -5, 10, 10);
  }


  fill(255,0,0);
  text(threshold, 10, 20);
}


void keyPressed(){
  if(key == '-'){
    threshold -= 10;
  }

  if(key == '='){
    threshold += 10;
  }
}

Everything works great, but now I'm trying to make it detect when my fingers are over certain locations of the window. I am creating a picture with Photoshop which will be displayed on the screen in Processing, and I want the JPG to have locations where several things happen when my fingers touch these spaces (for example, objects appearing suddenly, other windows opening...). Is it possible? How can I do it?
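A sketch of the hit-test idea, building on the code above: define rectangular hotspots over the displayed JPG and check each detected finger against them. The hotspot coordinates are illustrative, and the snippet belongs at the end of draw(), after fingers.update(depthMap).

// a hotspot as x, y, w, h over the displayed image
float hx = 100, hy = 100, hw = 120, hh = 80;

// at the end of draw():
boolean touched = false;
for (int i = 0; i < fingers.getNumFingers(); i++) {
  PVector p = fingers.getFinger(i);
  if (p.x > hx && p.x < hx + hw && p.y > hy && p.y < hy + hh) {
    touched = true;
  }
}
if (touched) {
  // trigger whatever should happen here: draw an image, start an animation...
  fill(0, 0, 255, 128);
  rect(hx, hy, hw, hh);
}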

Thank you for your future answers.

Kinect Physics


Hi everyone,

I am trying to run this code, made by Amnon Owed and published on http://www.creativeapplications.net/processing/kinect-physics-tutorial-for-processing/. I am running into many different problems. I was able to handle some of them, but I am stuck now on this:

"cannot convert form Object to PVector" or "cannot convert from Object to MainSketch.CustomShape"

I am using:

  • Processing 2.2.1
  • BlobDetection library
  • SimpleOpenNI-1.96
  • Toxiclibs-0020
  • Box2D for processing
  • pbox2d (for some reason I had to add it manually to the libraries folder)

  • A Kinect mod. 1473

  • OSX 10.12

I know it doesn't mean anything without looking at the code, so here it is. MainSketch:

// import libraries
import processing.opengl.*; // opengl
import SimpleOpenNI.*; // kinect
import blobDetection.*; // blobs
import toxi.geom.*; // toxiclibs shapes and vectors
import toxi.processing.*; // toxiclibs display
import pbox2d.*; // shiffman's jbox2d helper library
import org.jbox2d.collision.shapes.*; // jbox2d
import org.jbox2d.common.*; // jbox2d
import org.jbox2d.dynamics.*; // jbox2d

// declare SimpleOpenNI object
SimpleOpenNI context;
// declare BlobDetection object
BlobDetection theBlobDetection;
// ToxiclibsSupport for displaying polygons
ToxiclibsSupport gfx;
// declare custom PolygonBlob object (see class for more info)
PolygonBlob poly;

// PImage to hold incoming imagery and smaller one for blob detection
PImage cam, blobs;
// the kinect's dimensions to be used later on for calculations
int kinectWidth = 640;
int kinectHeight = 480;
// to center and rescale from 640x480 to higher custom resolutions
float reScale;

// background and blob color
color bgColor, blobColor;
// three color palettes (artifact from me storing many interesting color palettes as strings in an external data file)
String[] palettes = {
  "-1117720,-13683658,-8410437,-9998215,-1849945,-5517090,-4250587,-14178341,-5804972,-3498634",
  "-67879,-9633503,-8858441,-144382,-4996094,-16604779,-588031",
  "-1978728,-724510,-15131349,-13932461,-4741770,-9232823,-3195858,-8989771,-2850983,-10314372"
};
color[] colorPalette;

// the main PBox2D object in which all the physics-based stuff is happening
PBox2D box2d;
// list to hold all the custom shapes (circles, polygons)
ArrayList<CustomShape> polygons = new ArrayList<CustomShape>();

void setup() {
  // it's possible to customize this, for example 1920x1080
  size(1280, 720, OPENGL);
  context = new SimpleOpenNI(this);
  // initialize SimpleOpenNI object
  if (!context.enableUser()) {
    // if context.enableUser() returns false
    // then the Kinect is not working correctly
    // make sure the green light is blinking
    println("Kinect not connected!");
    exit();
  } else {
    // mirror the image to be more intuitive
    context.setMirror(true);
    // calculate the reScale value
    // currently it's rescaled to fill the complete width (cuts off top-bottom)
    // it's also possible to fill the complete height (leaves empty sides)
    reScale = (float) width / kinectWidth;
    // create a smaller blob image for speed and efficiency
    blobs = createImage(kinectWidth/3, kinectHeight/3, RGB);
    // initialize blob detection object to the blob image dimensions
    theBlobDetection = new BlobDetection(blobs.width, blobs.height);
    theBlobDetection.setThreshold(0.2);
    // initialize ToxiclibsSupport object
    gfx = new ToxiclibsSupport(this);
    // setup box2d, create world, set gravity
    box2d = new PBox2D(this);
    box2d.createWorld();
    box2d.setGravity(0, -20);
    // set random colors (background, blob)
    setRandomColors(1);
  }
}

void draw() {
  background(bgColor);
  // update the SimpleOpenNI object
  context.update();
  // put the image into a PImage
  cam = context.userImage().get();
  // copy the image into the smaller blob image
  blobs.copy(cam, 0, 0, cam.width, cam.height, 0, 0, blobs.width, blobs.height);
  // blur the blob image
  blobs.filter(BLUR, 1);
  // detect the blobs
  theBlobDetection.computeBlobs(blobs.pixels);
  // initialize a new polygon
  poly = new PolygonBlob();
  // create the polygon from the blobs (custom functionality, see class)
  poly.createPolygon();
  // create the box2d body from the polygon
  poly.createBody();
  // update and draw everything (see method)
  updateAndDrawBox2D();
  // destroy the person's body (important!)
  poly.destroyBody();
  // set the colors randomly every 240th frame
  setRandomColors(240);
}

void updateAndDrawBox2D() {
  // if frameRate is sufficient, add a polygon and a circle with a random radius
  if (frameRate > 29) {
    polygons.add(new CustomShape(kinectWidth/2, -50, -1));
    polygons.add(new CustomShape(kinectWidth/2, -50, random(2.5, 20)));
  }
  // take one step in the box2d physics world
  box2d.step();

  // center and reScale from Kinect to custom dimensions
  translate(0, (height-kinectHeight*reScale)/2);
  scale(reScale);

  // display the person's polygon
  noStroke();
  fill(blobColor);
  gfx.polygon2D(poly);

  // display all the shapes (circles, polygons)
  // go backwards to allow removal of shapes
  for (int i=polygons.size ()-1; i>=0; i--) {
    CustomShape cs = polygons.get(i);
    // if the shape is off-screen remove it (see class for more info)
    if (cs.done()) {
      polygons.remove(i);
      // otherwise update (keep shape outside person) and display (circle or polygon)
    } else {
      cs.update();
      cs.display();
    }
  }
}

// sets the colors every nth frame
void setRandomColors(int nthFrame) {
  if (frameCount % nthFrame == 0) {
    // turn a palette into a series of strings
    String[] paletteStrings = split(palettes[int(random(palettes.length))], ",");
    // turn strings into colors
    colorPalette = new color[paletteStrings.length];
    for (int i=0; i<paletteStrings.length; i++) {
      colorPalette[i] = int(paletteStrings[i]);
    }
    // set background color to first color from palette
    bgColor = colorPalette[0];
    // set blob color to second color from palette
    blobColor = colorPalette[1];
    // set all shape colors randomly
    for (CustomShape cs : polygons) {
      cs.col = getRandomColor();
    }
  }
}

// returns a random color from the palette (excluding first aka background color)
color getRandomColor() {
  return colorPalette[int(random(1, colorPalette.length))];
}

CustomShape_class

// usually one would probably make a generic Shape class and subclass different types (circle, polygon), but that
// would mean at least 3 instead of 1 class, so for this tutorial it's a combi-class CustomShape for all types of shapes
// to save some space and keep the code as concise as possible I took a few shortcuts to prevent repeating the same code
class CustomShape {
  // to hold the box2d body
  Body body;
  // to hold the Toxiclibs polygon shape
  Polygon2D toxiPoly;
  // custom color for each shape
  color col;
  // radius (also used to distinguish between circles and polygons in this combi-class)
  float r;

  CustomShape(float x, float y, float r) {
    this.r = r;
    // create a body (polygon or circle based on the r)
    makeBody(x, y);
    // get a random color
    col = getRandomColor();
  }

  void makeBody(float x, float y) {
    // define a dynamic body positioned at xy in box2d world coordinates,
    // create it and set the initial values for this box2d body's speed and angle
    BodyDef bd = new BodyDef();
    bd.type = BodyType.DYNAMIC;
    bd.position.set(box2d.coordPixelsToWorld(new Vec2(x, y)));
    body = box2d.createBody(bd);
    body.setLinearVelocity(new Vec2(random(-8, 8), random(2, 8)));
    body.setAngularVelocity(random(-5, 5));

    // depending on the r this combi-code creates either a box2d polygon or a circle
    if (r == -1) {
      // box2d polygon shape
      PolygonShape sd = new PolygonShape();
      // toxiclibs polygon creator (triangle, square, etc)
      toxiPoly = new Circle(random(5, 20)).toPolygon2D(int(random(3, 6)));
      // place the toxiclibs polygon's vertices into a vec2d array
      Vec2[] vertices = new Vec2[toxiPoly.getNumPoints()];
      for (int i=0; i<vertices.length; i++) {
        Vec2D v = toxiPoly.vertices.get(i);
        vertices[i] = box2d.vectorPixelsToWorld(new Vec2(v.x, v.y));
      }
      // put the vertices into the box2d shape
      sd.set(vertices, vertices.length);
      // create the fixture from the shape (deflect things based on the actual polygon shape)
      body.createFixture(sd, 1);
    } else {
      // box2d circle shape of radius r
      CircleShape cs = new CircleShape();
      cs.m_radius = box2d.scalarPixelsToWorld(r);
      // tweak the circle's fixture def a little bit
      FixtureDef fd = new FixtureDef();
      fd.shape = cs;
      fd.density = 1;
      fd.friction = 0.01;
      fd.restitution = 0.3;
      // create the fixture from the shape's fixture def (deflect things based on the actual circle shape)
      body.createFixture(fd);
    }
  }

  // method to loosely move shapes outside a person's polygon
  // (alternatively you could allow or remove shapes inside a person's polygon)
  void update() {
    // get the screen position of this shape (circle or polygon)
    Vec2 posScreen = box2d.getBodyPixelCoord(body);
    // turn it into a toxiclibs Vec2D
    Vec2D toxiScreen = new Vec2D(posScreen.x, posScreen.y);
    // check if this shape's position is inside the person's polygon
    boolean inBody = poly.containsPoint(toxiScreen);
    // if a shape is inside the person
    if (inBody) {
      // find the closest point on the polygon to the current position
      Vec2D closestPoint = toxiScreen;
      float closestDistance = 9999999;
      for (Vec2D v : poly.vertices) {
        float distance = v.distanceTo(toxiScreen);
        if (distance < closestDistance) {
          closestDistance = distance;
          closestPoint = v;
        }
      }
      // create a box2d position from the closest point on the polygon
      Vec2 contourPos = new Vec2(closestPoint.x, closestPoint.y);
      Vec2 posWorld = box2d.coordPixelsToWorld(contourPos);
      float angle = body.getAngle();
      // set the box2d body's position of this CustomShape to the new position (use the current angle)
      body.setTransform(posWorld, angle);
    }
  }

  // display the CustomShape
  void display() {
    // get the pixel coordinates of the body
    Vec2 pos = box2d.getBodyPixelCoord(body);
    pushMatrix();
    // translate to the position
    translate(pos.x, pos.y);
    noStroke();
    // use the shape's custom color
    fill(col);
    // depending on the r this combi-code displays either a polygon or a circle
    if (r == -1) {
      // rotate by the body's angle
      float a = body.getAngle();
      rotate(-a); // minus!
      gfx.polygon2D(toxiPoly);
    } else {
      ellipse(0, 0, r*2, r*2);
    }
    popMatrix();
  }

  // if the shape moves off-screen, destroy the box2d body (important!)
  // and return true (which will lead to the removal of this CustomShape object)
  boolean done() {
    Vec2 posScreen = box2d.getBodyPixelCoord(body);
    boolean offscreen = posScreen.y > height;
    if (offscreen) {
      box2d.destroyBody(body);
      return true;
    }
    return false;
  }
}

PolygonBlob_class (where I get this error)

// an extended polygon class quite similar to the earlier PolygonBlob class (but extending Toxiclibs' Polygon2D class instead)
// The main difference is that this one is able to create (and destroy) a box2d body from its own shape
class PolygonBlob extends Polygon2D {
  // to hold the box2d body
  Body body;

  // the createPolygon() method is nearly identical to the one presented earlier
  // see the Kinect Flow Example for a more detailed description of this method (again, feel free to improve it)
  void createPolygon() {
    ArrayList<ArrayList<PVector>> contours = new ArrayList<ArrayList<PVector>>();
    int selectedContour = 0;
    int selectedPoint = 0;

    // create contours from blobs
    for (int n=0; n<theBlobDetection.getBlobNb (); n++) {
      Blob b = theBlobDetection.getBlob(n);
      if (b != null && b.getEdgeNb() > 100) {
        ArrayList<PVector> contour = new ArrayList<PVector>();
        for (int m=0; m<b.getEdgeNb (); m++) {
          EdgeVertex eA = b.getEdgeVertexA(m);
          EdgeVertex eB = b.getEdgeVertexB(m);
          if (eA != null && eB != null) {
            EdgeVertex fn = b.getEdgeVertexA((m+1) % b.getEdgeNb());
            EdgeVertex fp = b.getEdgeVertexA((max(0, m-1)));
            float dn = dist(eA.x*kinectWidth, eA.y*kinectHeight, fn.x*kinectWidth, fn.y*kinectHeight);
            float dp = dist(eA.x*kinectWidth, eA.y*kinectHeight, fp.x*kinectWidth, fp.y*kinectHeight);
            if (dn > 15 || dp > 15) {
              if (contour.size() > 0) {
                contour.add(new PVector(eB.x*kinectWidth, eB.y*kinectHeight));
                contours.add(contour);
                contour = new ArrayList<PVector>();
              } else {
                contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
              }
            } else {
              contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
            }
          }
        }
      }
    }

    while (contours.size () > 0) {

      // find next contour
      float distance = 999999999;
      if (getNumPoints() > 0) {
        Vec2D vecLastPoint = vertices.get(getNumPoints()-1);
        PVector lastPoint = new PVector(vecLastPoint.x, vecLastPoint.y);
        for (int i=0; i<contours.size (); i++) {
          ArrayList<PVector> c = contours.get(i);
          PVector fp = c.get(0);
          PVector lp = c.get(c.size()-1);
          if (fp.dist(lastPoint) < distance) {
            distance = fp.dist(lastPoint);
            selectedContour = i;
            selectedPoint = 0;
          }
          if (lp.dist(lastPoint) < distance) {
            distance = lp.dist(lastPoint);
            selectedContour = i;
            selectedPoint = 1;
          }
        }
      } else {
        PVector closestPoint = new PVector(width, height);
        for (int i=0; i<contours.size (); i++) {
          ArrayList<PVector> c = contours.get(i);
          PVector fp = c.get(0);
          PVector lp = c.get(c.size()-1);
          if (fp.y > kinectHeight-5 && fp.x < closestPoint.x) {
            closestPoint = fp;
            selectedContour = i;
            selectedPoint = 0;
          }
          if (lp.y > kinectHeight-5 && lp.x < closestPoint.x) {
            closestPoint = lp;
            selectedContour = i;
            selectedPoint = 1;
          }
        }
      }

      // add contour to polygon
      ArrayList<PVector> contour = contours.get(selectedContour);
      if (selectedPoint > 0) {
        Collections.reverse(contour);
      }
      for (PVector p : contour) {
        add(new Vec2D(p.x, p.y));
      }
      contours.remove(selectedContour);
    }
  }

  // creates a shape-deflecting physics chain in the box2d world from this polygon
  void createBody() {
    // for stability the body is always created (and later destroyed)
    BodyDef bd = new BodyDef();
    body = box2d.createBody(bd);
    // if there are more than 0 points (aka a person on screen)...
    if (getNumPoints() > 0) {
      // create a vec2d array of vertices in box2d world coordinates from this polygon
      Vec2[] verts = new Vec2[getNumPoints()];
      for (int i=0; i<getNumPoints (); i++) {
        Vec2D v = vertices.get(i);
        verts[i] = box2d.coordPixelsToWorld(v.x, v.y);
      }
      // create a chain from the array of vertices
      ChainShape chain = new ChainShape();
      chain.createChain(verts, verts.length);
      // create fixture in body from the chain (this makes it actually deflect other shapes)
      body.createFixture(chain, 1);
    }
  }

  // destroy the box2d body (important!)
  void destroyBody() {
    box2d.destroyBody(body);
  }
}

Thank you in advance for your help.


use opencv output data/frames for further analysis


Dear team,

I have an issue regarding the following: I use OpenCV to perform a first analysis of a video. I also use Daniel Shiffman's computer vision code to analyse pixels.

I would like to take further action on the processed video data/frames generated in draw() by OpenCV, but the second stage keeps using the original video as input. What is the correct way to use the OpenCV-generated output drawn in draw() (the drawn contours + created background) as input for further analysis?
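One way, sketched under the assumption that the analysis can run on the rendered frame: everything drawn in draw() can be read back with get() as a PImage, so the contour rendering can feed the second-stage pixel loop directly, without saving frames and re-encoding an mp4. analyze() is a hypothetical hook for the second script's code.

void draw() {
  // ... the OpenCV background subtraction and contour drawing go here ...

  PImage processed = get();   // read back everything rendered this frame
  analyze(processed);         // feed it straight into the second stage
}

// hypothetical hook: the second script's pixel/blob loop goes here,
// reading frame.pixels[loc] instead of video.pixels[loc]
void analyze(PImage frame) {
  frame.loadPixels();
  // color currentColor = frame.pixels[loc]; ...
}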

import gab.opencv.*;
import processing.video.*;

Movie video;
OpenCV opencv;


////////////////////new code
int blobCounter = 0;
int framecount= 0;
int maxLife = 50;

color trackColor;
float threshold = 100;
float distThreshold = 50;   // the threshold for starting a new blob if the pixel distance is too big



////////////////////end new code


void setup() {
  size(480, 360);
  video = new Movie(this, "drukke_straat.mp4");    // "drukke straat" (Dutch: busy street)



  opencv = new OpenCV(this, width, height);
  opencv.startBackgroundSubtraction(20, 3, 0.5);

  video.loop();
  video.play();

}


void draw() {

  if(video.width > 0 && video.height > 0){//check if the cam instance has loaded pixels else do nothing

  //frameRate(25);
  //image(video, 0, 0);
  background(0);


  opencv.loadImage(video);
  opencv.updateBackground();
  opencv.dilate();
  opencv.erode();


  for (Contour contour : opencv.findContours()) {
     fill(255,0,0);
     strokeWeight(1);
     stroke(255, 0, 0);
     contour.draw();
      }
  }

  // saveFrame("output2/frames"+framecount+".png");
    framecount ++ ;

}



 void movieEvent(Movie m) {
     m.read();
      }

Below is the code that uses the video as input. I am not sure how to correctly feed it with the OpenCV-generated data. Currently I have two separate scripts: first I run the OpenCV script and save all the frames, then I make an mp4 file with Movie Maker, and after that I use it in the second script. Some advice would be appreciated.

   void draw() {

  if(video.width > 0 && video.height > 0){//check if the cam instance has loaded pixels else do nothing

  frameRate(8);
  image(video, 0, 0);
 //background(0);


ArrayList<Blob> currentBlobs = new ArrayList<Blob>();    // new array is created previous current array is cleared

  // Begin loop to walk through every pixel
  for (int x = 0; x < video.width; x++ ) {
    for (int y = 0; y < video.height; y++ ) {
      int loc = x + y * video.width;
      // What is current color
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      float d = distSq(r1, g1, b1, r2, g2, b2);

      if (d < threshold*threshold) {       // threshold*threshold because d is also left squared

making a particle emitter from kinect2 depth data


Hey there.

I'm trying to make a particle emitter out of kinect2 depth data, for a dance piece. Essentially what I want to do is extract the body shape, and make that the particle emitter.

What I currently have coded is crude but working; it is also killing my machine (a mid-2012 MacBook Pro, 2.9 GHz processor, 8 GB 1600 MHz DDR3 memory), and I am yet to add extra detail.

My code essentially does this: it gets the raw Kinect depth data and loops through it; if a pixel is in the specified depth range, it is counted as an emitter location and added to an array list.

These pixel coordinates are then checked against an ArrayList of particles. If there is no particle at that point, one is added to the ArrayList.

THEN, I want to start a particle decaying if it is not in the emitter area, so I loop again through my array of particles:

if a particle is not in my array of emitter locations, then do things.

THEN loop through and update and draw the particles

Can someone offer advice as to how this can be optimised? The code is below in all its seriously messy glory.

import org.openkinect.processing.*;


Kinect2 kinect2;
PImage img;

float RANGE_MIN = 1000;
float RANGE_MAX = 2000;
int KINECT_WIDTH = 512;   // the v2 depth image is 512x424
int KINECT_HEIGHT = 424;

// define all the variables
ParticleEmitter particleEmitter;
//constants - colors
color BACKGROUND = color(0, 0, 0, 100);

// set size.
void setup() {
  size(512,424);
  kinect2 = new Kinect2(this);
  //initialise the depth feed
  kinect2.initDepth();
  // initialise the whole device.
  kinect2.initDevice();

  particleEmitter = new ParticleEmitter(kinect2);

}

void draw(){
  background(BACKGROUND);

  //particleEmitter.display();
  //fill(255);
  //ellipse(10, 10, 1, 1);
  particleEmitter.refresh();
  particleEmitter.emit();
}

// -----------------
// ParticleEmitter class
// - will eventually read in data from the kinect and extract the performers body shape
// - using this as the emitter.
// -----------------
import java.util.Iterator;

class ParticleEmitter {
  PImage img;

  //constants
  int PARTICLE_DISTANCE = 5; // distance between particles in the emitter.

  //particles
  ArrayList<Particle> particles = new ArrayList<Particle>() ;
  ArrayList<PVector> emitter = new ArrayList<PVector>() ;

  //kinect
  Kinect2 kinect;

  // ----
  //constructor
  // ----
  ParticleEmitter(Kinect2 _kinect){
    kinect = _kinect;
    //create a blank image to put all the depth info onto.
     img = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
  }


  // ----
  // Refresh
  //
  // clears the arraylist of locations where particles can spawn from
  // creates new array list of locations where particles can spawn from
  // ----
  void refresh(){
    emitter.clear();
     img.loadPixels(); //need to operate on the pixels of the image.
                    //goes hand in hand with image.updatePixels();

    //get depth data from the kinect
    int [] depth = kinect2.getRawDepth();  //this is an array of ints relating to pixels in the depth image.
                                         // 0 - 4500 --> 0 relating to distance in mm;
    //first - establish the locations where particles can grow
    for (int x = 0; x < kinect2.depthWidth; x++){
      for(int y = 0; y < kinect2.depthHeight; y++){
        int offset  = x + (y * kinect2.depthWidth);
        int d = depth[offset];
        if (d > RANGE_MIN && d < RANGE_MAX){
          PVector location = new PVector(x, y);
          emitter.add(location);
          //does it exist already?
          if (!has(location)){
            float temp = random(0,90000);
            if (temp > 89990){
              Particle newParticle = new Particle(location);
              particles.add(newParticle);
            }
          }
        }
      }
    }
  }

  // ----
  // emit
  //
  // updates the particles.
  // for each particle, if it is not in the emitter area, then begin its decomposition
  // if it is in the emitter area - do nothing
  //
  void emit(){
    Iterator <Particle> it = particles.iterator();
    while (it.hasNext()) {
      Particle p = it.next();
      //loop through emitter area - is it in or out?
      boolean kill = true;
      for (PVector emitLocation: emitter){
        //if theres a match make it false
        if (p.location.dist(emitLocation) < PARTICLE_DISTANCE){
          kill = false;
        }
      }
      p.kill(kill);
      p.run();
      if (p.isDead()) {
        it.remove();
      }

    }
  }
  // ---
  // Has
  //
  // checks to see if a particle exists within this location
  // ---
  boolean has(PVector location){
    boolean has = false;
    for (Particle particle : particles){
        if(location.dist(particle.location) < PARTICLE_DISTANCE){
          has = true;
        }
    }
    return has;
  }
}

// ----------
// Particle class
//
// defines the individual particles.
// methods for behaviour.
//
// ----------
class Particle {
  //constants
  float MAX_LIFESPAN = 255;
  color PARTICLE_COLOR = color(255,255,255);

  // force and direction
  PVector location;
  PVector velocity;
  PVector acceleration;

  //lifespan and activity
  boolean dying;
  float lifespan;
  float mass; //may or may not be used...




  // ---
  // constructor
  // ---
  Particle(PVector _location){
    location = _location;

    acceleration = new PVector(0,0.1);//downward
    velocity = new PVector(random(-1,1),random(-1,2));

    dying = false;
    //lifespan = RESET_LIFESPAN;
    lifespan = 10.0;
  }

  void run(){
    update();
    display();
  }

  void update(){
    // println(lifespan); // printing every particle every frame tanks the framerate
    if (dying){
      velocity.add(acceleration);
      location.add(velocity);
      lifespan -= 2.0;
    }
    else{
      if (lifespan < MAX_LIFESPAN){
        lifespan += 4.0;
      }
    }
  }

  void display(){
    noStroke();
    fill(PARTICLE_COLOR, lifespan);
    ellipse(location.x, location.y, 3, 3);
  }

  boolean isDead() {
    if (lifespan < 0) {
      return true;
    } else {
      return false;
    }
  }

  void kill(boolean _dying){
    dying = _dying;
  }

}
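One optimisation worth sketching (assumptions: the depth frame is 512x424 and particle positions stay in pixel coordinates): the emit() loop above is O(particles x emitter locations), because every particle is distance-checked against every emitter PVector. Storing the emitter as a boolean mask indexed by pixel makes the same test an O(1) array lookup.

// standalone sketch of the mask idea: an O(1) lookup replaces the
// per-particle loop over every emitter location
int w = 512, h = 424;
boolean[] emitterMask = new boolean[w * h];

void buildMask(int[] depth, float rangeMin, float rangeMax) {
  // call once per frame instead of filling an ArrayList<PVector>
  for (int i = 0; i < depth.length; i++) {
    emitterMask[i] = (depth[i] > rangeMin && depth[i] < rangeMax);
  }
}

boolean insideEmitter(PVector p) {
  int px = constrain(int(p.x), 0, w - 1);
  int py = constrain(int(p.y), 0, h - 1);
  return emitterMask[px + py * w];
}

With this, emit() reduces to p.kill(!insideEmitter(p.location)) and the nested distance loop disappears; sampling every 2nd or 3rd pixel in refresh() cuts the spawning loop's cost further.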

Processing libraries for kinect One (Xbox One kinect + pc adapter)?


Hello!

I've seen libraries here and there that support v1 and v2 of the Kinect, but those are hard to purchase lately (the v2 was discontinued by Microsoft). So I was wondering whether there are any functional libraries that work with the Kinect One + PC adapter and Processing?

Thanks for info!

OpenCV Facial Recognition


Looking for any info about doing Facial Recognition in Processing (not to be confused with Facial Detection).

Greg Borenstein's lib here gives basic access to OpenCV (and is great BTW!):

https://github.com/atduskgreg/opencv-processing

It's mentioned in the library's introduction that "In addition to using the wrapped functionality, you can import OpenCV modules and use any of its documented functions".

OpenCV does have a FaceRecognition module, here:

http://docs.opencv.org/3.0-beta/modules/face/doc/facerec/facerec_api.html#FaceRecognizer

If anyone has gone down this path, or knows a general direction to travel, the help would be greatly appreciated.

Colored window instead of skeleton with Thomas Sanchez skeleton3D sketch


Hello! I am encountering some difficulties with Thomas Sanchez Lengeling's Processing sketch "skeleton3D" in the "OpenCV-Processing" library. I have a Kinect v2 that seems to be properly installed, since it works like a charm with other sketches. But when I run this sketch I get a whole colored window instead of seeing the skeleton when it is detected. I have no error messages in the Processing console when running the sketch. I work on Windows 10 with Processing 3.2.4. Any idea?

fisica array


Why does the poly object have to be created in setup() rather than drawn repeatedly? I want to add an active poly object to the world.

import fisica.*;

FWorld world;
FPoly poly;

PVector[] xy = new PVector[5];

void setup() {
  size(640, 480);

  Fisica.init(this);
  world = new FWorld();
  world.setGravity(0, 800);
  world.setEdges();
  world.remove(world.left);
  world.remove(world.right);
  world.remove(world.top);
  world.setEdgesRestitution(0.5);

  newBody();              //// create the body first,
  world.add(poly);        //// otherwise a null poly is added to the world
  poly.removeFromWorld(); ////

  for (int i = 0; i < xy.length; i++) {
    xy[i] = new PVector(random(100, width-100), random(100, height-100));
  }
}

void draw() {
  background(255);
  world.step();
  world.draw(this);
  drawb();
  poly.draw(this);
}

void drawb() {
  // note: this adds five new vertices to the same poly every frame
  for (int i = 0; i < xy.length; i++) {
    poly.vertex(xy[i].x + random(25), xy[i].y + random(25));
  }
}

void newBody() {
  poly = new FPoly();
  poly.setPosition(width/8, height/8);
  poly.setStrokeWeight(1);
  poly.setStroke(0, 100);
  poly.setNoFill();
  poly.setDensity(10);
  poly.setRestitution(0);
}

How do I insert video certain distance from kinect so movement closer to it distorts the video?


How do I superimpose a video at a certain distance from the kinect so that movement closer to the kinect than that distance distorts the video?

It would be great if the video could 'stretch' over any shapes that are closer to the Kinect than the virtual distance at which the video sits.

I'm exploring some options for live visuals for my band. We have a video that runs through our entire live set and I'd like to add some interactivity. I'm quite new to Processing and I've been learning some of the basics recently, but I know the solution to my question is no doubt a very complex one. I'd appreciate any help this forum can offer.

I'm running Processing 3 on Mac OS X and I'm using a Kinect Model 1414.
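A minimal sketch of one way to approach this, assuming the OpenKinect for Processing library (Kinect v1, getRawDepth()) and a placeholder video file name: the video is drawn as a grid of points, and any depth reading nearer than a threshold pushes its points out of the image plane, "stretching" the video over the nearer shape. The threshold and ranges are illustrative.

import org.openkinect.processing.*;
import processing.video.*;

Kinect kinect;
Movie video;
int THRESHOLD = 1200; // mm: anything nearer than this distorts the video

void setup() {
  size(640, 480, P3D);
  kinect = new Kinect(this);
  kinect.initDepth();
  video = new Movie(this, "clip.mp4"); // hypothetical file name
  video.loop();
}

void draw() {
  background(0);
  if (video.width == 0) return; // wait for the first video frame
  video.loadPixels();
  int[] depth = kinect.getRawDepth();
  int step = 8; // sample every 8th pixel for speed
  for (int y = 0; y < 480; y += step) {
    for (int x = 0; x < 640; x += step) {
      int d = depth[x + y * 640];
      // nearer than the threshold: lift the point toward the viewer
      float z = (d > 0 && d < THRESHOLD) ? map(d, 0, THRESHOLD, 200, 0) : 0;
      int loc = min(x, video.width - 1) + min(y, video.height - 1) * video.width;
      stroke(video.pixels[loc]);
      strokeWeight(3);
      point(x, y, z);
    }
  }
}

void movieEvent(Movie m) {
  m.read();
}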


Kinect Projection Mapping


Hello, I'm currently doing a university project on 'Motion Tracking Projection Mapping'. I'm focusing on a 'markerless' system using the Kinect v1. For the end result I basically want to be able to project onto a moving person and/or objects. I'm currently using Jon Bellona's SimpleKinect Processing code to do skeleton tracking and then take the OSC coordinate data from the joint positions. I then hope to project some effects at these coordinates, not necessarily 100% on the person/objects, but to project effects to the skeleton data points.

I've already looked at things like the Kinect Projector Toolkit, but I need to scientifically measure something for my assignment, so I'm trying to measure and compare the accuracy of markerless and 'marker-based' motion projection mapping. I'm currently using Processing 2 on Mac OS X.

Does anyone have any experience or has seen anything similar to what I am trying to achieve? Anything to do with OSC skeleton data and projection that has been done before?

Any suggestions or examples would be a huge help to me.

Thanks a lot!

JpBellona's Simple Kinect not displaying anything


Hi, I'm trying to use Jon Bellona's SimpleKinect software along with Processing code. I am running skeleton-tracking code alongside SimpleKinect v1.0.2 with my Kinect connected to my laptop. The OSC port is set to 8000 and the address is the host. My Processing code says that oscP5 is running but listening at 3301.

I'm basically just trying to get the skeleton-tracking data and display it on my computer, but at the moment, when sending all joints, nothing is displayed on my screen. My Kinect is connected and the red infrared light is on, so it's receiving something. Is it a case of changing the port (and if so, how), or is there another problem or another piece of code I need to run alongside?
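That port mismatch is worth checking first: oscP5 only receives what arrives on the port it listens to. A minimal receiver, assuming SimpleKinect really is sending to port 8000 (the number from the post):

import oscP5.*;

OscP5 oscP5;

void setup() {
  // listen on the same port SimpleKinect sends to
  oscP5 = new OscP5(this, 8000);
}

void draw() {
}

void oscEvent(OscMessage msg) {
  // print every incoming address pattern to confirm joints are arriving
  println(msg.addrPattern());
}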

Any help would be greatly appreciated.

Thanks!

could not load video

        import gab.opencv.*;
        import processing.video.*;

        Movie video;
        OpenCV opencv;

        void setup() {
          size(800, 800);
          video = new Movie(this, "a.mp4");
          opencv = new OpenCV(this, 800, 800);

          opencv.startBackgroundSubtraction(5, 3, 0.5);

          video.loop();
          video.play();
        }

        void draw() {
          image(video, 0, 0);
          opencv.loadImage(video);

          opencv.updateBackground();

          opencv.dilate();
          opencv.erode();

          noFill();
          stroke(255, 0, 0);
          strokeWeight(3);
          for (Contour contour : opencv.findContours()) {
            contour.draw();
          }
        }

        void movieEvent(Movie m) {
          m.read();
        }
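A common cause of "could not load video", assuming the default behaviour of the video library: Movie resolves relative names against the sketch's data folder, so the file either goes in <sketch>/data/ or is passed as an absolute path (the path below is hypothetical).

// either place a.mp4 in the sketch's data folder...
video = new Movie(this, "a.mp4");
// ...or give an absolute path explicitly
video = new Movie(this, "/Users/me/videos/a.mp4");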

Skeletal data to grid data


Sorry if this has been asked before, but I couldn't find anything. I'm quite new to both Kinect and programming, so it would be great if someone can help me out here.

In my processing sketch I have created a grid of 4*4 tiles, each having an assigned value of "0". I want to change this value to "1" if a human (or limb) is detected in that part of the grid.

This would mean that if a head is spotted in the upper-left corner, tile #1 gets a value of "1". If a foot is also spotted in the lower-right corner, tile #16 gets a value of "1". This results in a string of "1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1". This data is sent to Arduino through serial communication (I'm planning to use it to create silhouettes in LEDs).

I'm currently using the KinectPV2 library. I think I have to use this library's rawData, where each value that is NOT 255 tells me that a limb is found there (I assume rawData gives me the 'coordinates' of where the limb is found, but I can't test that, see my question further below. If I'm wrong, please correct me!).

To check this, I wrote:

  int[] rawData = kinect.getRawBodyTrack();

  for (int i = 0; i < rawData.length; i++){
    if (rawData[i] != 255){
      humanPresent = true;
      println(rawData[i]);
    }
  }

However, the problem seems to be that rawData is so extremely big (217088 values at minimum) that I can't run a loop like that without crashing Processing. This brings me to my question: how could I check rawData in an efficient way, or what would be a better way to change grid tile values based on the position of limbs?
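A sketch of one way to do the grid mapping, assuming getRawBodyTrack() returns a 512x424 array in row order with 255 meaning "no body" (as described above). Sampling every 4th pixel keeps the loop cheap; also note that the println() for every matching pixel is likely what actually stalls the sketch, not the loop itself. Grid dimensions here are illustrative.

int depthW = 512, depthH = 424;
int gridCols = 4, gridRows = 4;
int[] tiles = new int[gridCols * gridRows];

void updateGrid(int[] rawData) {
  for (int i = 0; i < tiles.length; i++) tiles[i] = 0; // reset all tiles
  for (int y = 0; y < depthH; y += 4) {      // sample every 4th row/column
    for (int x = 0; x < depthW; x += 4) {
      if (rawData[x + y * depthW] != 255) {  // a body pixel: mark its tile
        int tx = x * gridCols / depthW;
        int ty = y * gridRows / depthH;
        tiles[tx + ty * gridCols] = 1;
      }
    }
  }
}

Afterwards, join(nf(tiles, 1), ",") gives the "1,0,0,..." string to send over serial.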

Thanks!

Full code:

import processing.serial.*;
import KinectPV2.*;

KinectPV2 kinect;

//Grid
Table table;
int cols = 4; //grid Width (amount of valves)
int rows = 2 ;//gridHeight, amount of "pixels" we want in the grid, vertically
int gridTiles; //amount of tiles
int tileValue[];

//Enddata
String allGridValues = "";

//Serial
Serial myPort;

//Misc
boolean humanPresent = false;


void setup(){
  size(1200, 800);
  kinect = new KinectPV2(this);
  kinect.enableBodyTrackImg(true); //enables tracking bodies
  kinect.enableColorImg(true); //enables visualising the full video in color for testing purposes
  kinect.enableDepthImg(true); //enables black white image for testing purposes
  kinect.init();

  gridTiles = (rows*cols);
  setTable();

  //Serial comm
  //printArray(Serial.list()); //list available ports
  String portName = Serial.list()[0];
  myPort = new Serial(this, portName, 9600);

}


void draw(){
  clear(); //clears background
  background(255); //sets background to full white

  image(kinect.getColorImage(), 0, 0, width, height); //Full Color
  //image(kinect.getDepthImage(), 0, 0, width, height); //Black White

  //PImage kinectImg = kinect.getBodyTrackImage();
  //image(kinectImg, 512, 0);
  int [] rawData = kinect.getRawBodyTrack();

  for (int i = 0; i < rawData.length; i++){
    if (rawData[i] != 255){
      humanPresent = true;
      // println(rawData[i]); // printing every body pixel floods the console and stalls the sketch
      //break
    }
  }
}

void setTable(){
  table = new Table();
  table.addColumn("tile");
  table.addColumn("value");

  tileValue = new int[gridTiles];

  for (int i = 0; i < gridTiles; i++){
    tileValue[i] = 0;
    TableRow newRow = table.addRow();
    newRow.setInt("tile", i);
  }
}

Problems with PApplet


Hi guys

Apologies if my terminology is in any way incorrect; my Processing skills are a bit rusty. I've been trying to get an open-source toolkit for the Kinect (found here: https://github.com/genekogan/KinectProjectorToolkit) to work. It was built for Processing 2.x; however, finding compatible libraries like SimpleOpenNI has been a bit of a challenge. I've resolved to try updating the code for Processing 3.x, and for the most part everything seems to be working fine.

I did, however, come across some documentation on the newest version of Processing which states that applets won't function the same way anymore, which is where my problem seems to lie. The code is as follows:

public class ChessboardFrame extends JFrame {
  public ChessboardFrame() {
    setBounds(displayWidth, 0, pWidth, pHeight);
    ca = new ChessboardApplet();
    add(ca);
    removeNotify();
    setUndecorated(true);
    setAlwaysOnTop(false);
    setResizable(false);
    addNotify();
    ca.init();
    setVisible(true);
  }
}

public class ChessboardApplet extends PApplet {
  public void setup() {
    noLoop();
  }
  public void draw() {
  }
}

void saveCalibration(String filename) {
  String[] coeffs = getCalibrationString();
  saveStrings(dataPath(filename), coeffs);
}

void loadCalibration(String filename) {
  String[] s = loadStrings(dataPath(filename));
  x = new Jama.Matrix(11, 1);
  for (int i=0; i<s.length; i++) x.set(i, 0, Float.parseFloat(s[i]));
  calibrated = true;
  println("done loading");
}

When I try to run it, I get 2 errors: 'The function "add()" expects parameters like: "add(Component)"' and 'The function "init()" does not exist', referring to lines 5 and 11.
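(For context: in Processing 3, PApplet no longer extends Applet, so a sketch can't be add()-ed to a Swing container and has no init(); secondary sketches are instead launched with PApplet.runSketch(). A minimal sketch of that pattern, with hypothetical names, not a drop-in fix for the toolkit:)

public class ChessboardApplet extends PApplet {
  public void settings() {
    size(400, 400);   // in Processing 3, size() moves to settings()
  }
  public void draw() {
    background(0);
  }
}

ChessboardApplet ca;

void launchChessboard() {
  ca = new ChessboardApplet();
  // opens the second sketch in its own window
  PApplet.runSketch(new String[] { "ChessboardApplet" }, ca);
}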

Anyone got any advice as to what I should do to solve the problem?

Thanks
