Kinect - Processing 2.x and 3.x Forum
Using face detection to run a function


import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import ddf.minim.*;

Capture video;
OpenCV opencv;

Minim minim;
AudioPlayer song;

PFont font;
String time = "10";
int t;
int interval = 10;

void setup() {
  size(640, 500);

  minim = new Minim(this);
  song = minim.loadFile("alarm.mp3");

  font = createFont("Arial", 100);

  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();
}

void draw() {
  background(0);

  t = interval - int(millis()/1000);
  time = nf(t, 1);
  if (t == 0) {
    song.play();
    // interval += 10;
  }

  text(time, 10, 490);

  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  // println(faces.length);
  for (int i = 0; i < faces.length; i++) {
    // println(faces[i].x + "," + faces[i].y);
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}

void songClose() {
  song.close();
}

I'm fairly new to Processing. What would be the best way to use the face detection library to control when the alarm audio is stopped?

Thanks in advance.
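One common approach is to treat the result of opencv.detect() as the trigger: keep the countdown as it is, but pause the Minim player whenever at least one face is in frame, and only start it while no face is visible. A minimal sketch of that idea, reusing the variables from the code above; only draw() changes, and pause(), rewind() and isPlaying() are standard Minim AudioPlayer calls:

void draw() {
  background(0);

  opencv.loadImage(video);
  Rectangle[] faces = opencv.detect();

  t = interval - int(millis()/1000);
  time = nf(t, 1);

  if (t <= 0 && faces.length == 0 && !song.isPlaying()) {
    song.play();       // countdown elapsed and nobody is looking: sound the alarm
  }
  if (faces.length > 0 && song.isPlaying()) {
    song.pause();      // a face was detected: silence the alarm
    song.rewind();
  }

  text(time, 10, 490);

  scale(2);
  image(video, 0, 0);
  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}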

This is an urgent situation!! How to connect the hand position to mouseX/Y with the Kinect?


The problem is the connection between the hand position and mouseX/mouseY. The sketch works on the computer with the mouse, but the particles don't follow my hand position from the Kinect when I change mouseX/mouseY into just x/y. How do I make the particles follow the hand position instead of the mouse?

I couldn't find the right way and I need some help urgently.

Can you help me solve this?


import KinectPV2.KJoint;
import KinectPV2.*;

int num = 60;
PVector[] loc = new PVector[num];
PVector[] dir = new PVector[num];
float[] s = new float[num];

KinectPV2 kinect;
Particle Particle = new Particle();

float zVal = 300;
float rotX = PI;

void setup() {
  smooth();
  frameRate(130);
  size(1024, 768, P3D);

  initVariables();

  kinect = new KinectPV2(this);
  kinect.enableColorImg(true);

  // enable 3d with (x,y,z) position
  kinect.enableSkeleton3DMap(true);

  kinect.init();
}

void draw() {
  //background(0);
  image(kinect.getColorImage(), 0, 0, 320, 240);

  // translate the scene to the center
  pushMatrix();
  //translate(width/2, height/2, 0);
  //scale(zVal);
  //rotateX(rotX);

  ArrayList skeletonArray = kinect.getSkeleton3d();

  // individual JOINTS
  for (int i = 0; i < skeletonArray.size(); i++) {
    KSkeleton skeleton = (KSkeleton) skeletonArray.get(i);
    if (skeleton.isTracked()) {
      KJoint[] joints = skeleton.getJoints();

  KJoint leftHandJoint = joints[KinectPV2.JointType_HandLeft];
  //TODO: store a reference to this position and use it to draw some graphics.
  PVector leftHandPosition = leftHandJoint.getPosition();
  Particle.draw(leftHandPosition);
  leftHandJoint.getOrientation();


  KJoint rightHandJoint = joints[KinectPV2.JointType_HandRight];
  rightHandJoint.getPosition();
  PVector rightHandPosition = rightHandJoint.getPosition();
  Particle.draw(rightHandPosition);
  rightHandJoint.getOrientation();


  //Draw body
  color col  = skeleton.getIndexColor();
  stroke(col);
}

  }
  popMatrix();

  fill(255, 0, 0);
  text(frameRate, 50, 50);
}

void handState(int handState) {
  switch(handState) {
  case KinectPV2.HandState_Open:
    stroke(0, 255, 0);
    break;
  case KinectPV2.HandState_Closed:
    stroke(255, 0, 0);
    break;
  case KinectPV2.HandState_Lasso:
    stroke(0, 0, 255);
    break;
  case KinectPV2.HandState_NotTracked:
    stroke(100, 100, 100);
    break;
  }
}

public class Particle {
  public void draw(PVector handPosition) {
    //map(x, 0, width, 50, 150);

float x = map(handPosition.x, -1, 1, 0, width);
float y = map(handPosition.y, 1, -1, 0, height);

strokeWeight(0);
point(x, y, 10);

fill (#57385c, 50);
noStroke();
rect (0, 0, width, height);


fill (#ffedbc);
int i = 0;
while (i < s.length)
{
  moveBall(loc [i], dir [i], s [i]);
  checkEdges (loc [i], dir [i]);
  drawBall( loc [i]);
  i = i + 1;
}

  }
}

void checkEdges(PVector location, PVector direction) {
  if (location.x < 0) {
    location.x = 0;
    direction.x = direction.x * -1;
  }
  if (location.x > width) {
    location.x = width;
    direction.x = direction.x * -1;
  }
  if (location.y < 0) {
    location.y = 0;
    direction.y = direction.y * -1;
  }
  if (location.y > height) {
    location.y = height;
    direction.y = direction.y * -1;
  }
}

void moveBall(PVector location, PVector direction, float speed) {
  // NOTE: x and y are not defined in this scope; they need to be the mapped
  // hand coordinates (or mouseX/mouseY) for the particles to follow anything
  float angle = atan2(y - location.y, x - location.x);
  PVector target = new PVector(cos(angle), sin(angle));
  target.mult(0.26);

  direction.add(target);
  direction.normalize();

  PVector velocity = direction.get(); // copies direction
  velocity.mult(speed);
  location.add(velocity);
}

void drawBall(PVector location) {
  ellipse(location.x, location.y, 26, 26);
}

void initVariables() {
  int i = 0;
  while (i < s.length) {
    PVector location = new PVector(width/2, height/2);

float angle = random (TWO_PI);
PVector direction = new PVector (cos (angle) * 1, sin (angle) * 1);

float speed = random (50, 26);

loc [i] = location;
dir [i] = direction;
s[i] = speed;

i = i + 1;
smooth();

  }
}

Your help could save my life...
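One way to make the particles follow the hand instead of the mouse is to store the mapped hand position in a global PVector once per frame, and have moveBall() steer toward that vector instead of mouseX/mouseY. A minimal sketch of that idea, reusing the KinectPV2 setup and the loc/dir/s arrays from the post above (handTarget is a new variable introduced here):

// new global: last known hand position in screen coordinates
PVector handTarget = new PVector();

// inside draw(), after reading the left hand joint:
//   PVector leftHandPosition = joints[KinectPV2.JointType_HandLeft].getPosition();
handTarget.set(map(leftHandPosition.x, -1, 1, 0, width),
               map(leftHandPosition.y,  1, -1, 0, height));

// moveBall() then uses handTarget wherever it previously used mouseX/mouseY
void moveBall(PVector location, PVector direction, float speed) {
  float angle = atan2(handTarget.y - location.y, handTarget.x - location.x);
  PVector target = new PVector(cos(angle), sin(angle));
  target.mult(0.26);

  direction.add(target);
  direction.normalize();

  PVector velocity = direction.get(); // copies direction
  velocity.mult(speed);
  location.add(velocity);
}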

Object detection and tracking


I'm developing a robot which uses a webcam to navigate and avoid obstacles. I installed the OpenCV library for Processing, but some functions are missing compared to the original OpenCV library. In particular I need to detect objects, track them until they leave the screen, and for each one estimate the time to collision so I can change the direction of movement.
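The OpenCV for Processing wrapper only exposes part of the full OpenCV API, so a common starting point for obstacle tracking is background subtraction plus contour bounding boxes, matching each detection to the nearest one from the previous frame; time-to-collision can then be layered on top by measuring how fast a box grows. A rough sketch of that first part (the Capture/OpenCV setup mirrors the library examples; the size threshold and the nearest-neighbour matching are assumptions to tune):

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
ArrayList<PVector> prevCentroids = new ArrayList<PVector>();

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  opencv.startBackgroundSubtraction(5, 3, 0.5);   // history, mixtures, background ratio
  video.start();
}

void draw() {
  image(video, 0, 0);
  opencv.loadImage(video);
  opencv.updateBackground();
  opencv.dilate();
  opencv.erode();

  ArrayList<PVector> centroids = new ArrayList<PVector>();
  for (Contour c : opencv.findContours()) {
    Rectangle r = c.getBoundingBox();
    if (r.width * r.height < 400) continue;       // ignore tiny blobs
    PVector center = new PVector(r.x + r.width/2, r.y + r.height/2);
    centroids.add(center);

    // naive tracking: link to the closest centroid from the previous frame
    PVector prev = null;
    for (PVector p : prevCentroids) {
      if (prev == null || p.dist(center) < prev.dist(center)) prev = p;
    }
    stroke(0, 255, 0);
    noFill();
    rect(r.x, r.y, r.width, r.height);
    if (prev != null) line(prev.x, prev.y, center.x, center.y);  // motion vector
  }
  prevCentroids = centroids;
}

void captureEvent(Capture c) {
  c.read();
}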

Kinect v2 + JBullet + Windows 8


I have a Kinect V1 / JBullet / Processing 2.0b6 system running on Mac OS (multiple versions). I'm now trying to get the Kinect V2 to work with JBullet using the bRigid wrapper in Processing 2 on a Windows 8.1 machine. However, the Kinect library only seems to work in Processing 3, and bRigid only in Processing 2. The issue seems to be a Java dependency in the Kinect library (it requires Java 1.7, not 1.6). Has anybody found a workaround for this?

Depth map to 3d models


Hi all, I have a project I'm making for an installation next month. I'm seeking advice first to see if it's a viable concept within Processing. I want to create a virtual aquarium (in 3D) where ornaments created from Kinect 2 data are dropped in intermittently. I've not seen many projects that use Processing for 3D environments, but it has good libraries for implementing the Kinect. I've seen some good examples using WebGL for 3D environments, but there's not much documentation about using it with the Kinect. So it's a catch-22. Cheers

I want to combine two Processing sketches for the Kinect but it doesn't work. I don't understand why?


Hello, I'm working with a group on a virtual reality project based on the Kinect. I'm looking to change the background color as a function of time. We are amateurs, so if you can help us it would be a pleasure. Thank you in advance.

The first code:

import SimpleOpenNI.*;
import java.util.*;

SimpleOpenNI context;

int blob_array[];
int userCurID;
int cont_length = 640*480;
String[] sampletext = { "lu", "ma", "me", "ve", "sa", "di", "week", "end", "b", "c", "le" }; // sample random text

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.setMirror(true);
  context.enableDepth();
  context.enableUser();

  blob_array = new int[cont_length];
}

void draw() {
  background(-1);
  context.update();
  int[] depthValues = context.depthMap();
  int[] userMap = null;
  int userCount = context.getNumberOfUsers();
  if (userCount > 0) {
    userMap = context.userMap();
  }

  loadPixels();
  background(255, 260, 150);
  for (int y=0; y<context.depthHeight(); y+=35) {
    for (int x=0; x<context.depthWidth(); x+=35) {
      int index = x + y * context.depthWidth();
      if (userMap != null && userMap[index] > 0) {
        userCurID = userMap[index];
        blob_array[index] = 255;
        fill(150, 200, 30);
        text(sampletext[int(random(0, 10))], x, y); // put your sample random text

      }
      else {
                blob_array[index]=0;


      }
    }
  }

}

The second code (the main sketch):

// import libraries
import processing.opengl.*; // opengl
import SimpleOpenNI.*;      // kinect
import blobDetection.*;     // blobs

// this is a regular java import so we can use and extend the polygon class (see PolygonBlob)
import java.awt.Polygon;

// declare SimpleOpenNI object
SimpleOpenNI context;
// declare BlobDetection object
BlobDetection theBlobDetection;
// declare custom PolygonBlob object (see class for more info)
PolygonBlob poly = new PolygonBlob();

// PImage to hold incoming imagery and smaller one for blob detection
PImage cam, blobs;
// the kinect's dimensions to be used later on for calculations
int kinectWidth = 640;
int kinectHeight = 480;
// to center and rescale from 640x480 to higher custom resolutions
float reScale;

// background color
color bgColor;
// three color palettes (artifact from me storing many interesting color palettes as strings in an external data file ;-)
String[] palettes = {
  "-1117720,-13683658,-8410437,-9998215,-1849945,-5517090,-4250587,-14178341,-5804972,-3498634",
  "-67879,-9633503,-8858441,-144382,-4996094,-16604779,-588031",
  "-16711663,-13888933,-9029017,-5213092,-1787063,-11375744,-2167516,-15713402,-5389468,-2064585"
};

// an array called flow of 2250 Particle objects (see Particle class)
Particle[] flow = new Particle[2250];
// global variables to influence the movement of all particles
float globalX, globalY;

void setup() {
  // it's possible to customize this, for example 1920x1080
  size(1280, 720, OPENGL);
  // initialize SimpleOpenNI object
  context = new SimpleOpenNI(this);
  if (!context.enableDepth() || !context.enableUser()) {
    // if context.enableScene() returns false
    // then the Kinect is not working correctly
    // make sure the green light is blinking
    println("Kinect not connected!");
    exit();
  } else {
    // mirror the image to be more intuitive
    context.setMirror(true);
    // calculate the reScale value
    // currently it's rescaled to fill the complete width (cuts of top-bottom)
    // it's also possible to fill the complete height (leaves empty sides)
    reScale = (float) width / kinectWidth;
    // create a smaller blob image for speed and efficiency
    blobs = createImage(kinectWidth/3, kinectHeight/3, RGB);
    // initialize blob detection object to the blob image dimensions
    theBlobDetection = new BlobDetection(blobs.width, blobs.height);
    theBlobDetection.setThreshold(0.2);
    setupFlowfield();
  }
}

void draw() {
  // fading background
  noStroke();
  fill(bgColor, 65);
  rect(0, 0, width, height);
  // update the SimpleOpenNI object
  context.update();
  // put the image into a PImage
  cam = context.depthImage();
  // copy the image into the smaller blob image
  blobs.copy(cam, 0, 0, cam.width, cam.height, 0, 0, blobs.width, blobs.height);
  // blur the blob image
  blobs.filter(BLUR);
  // detect the blobs
  theBlobDetection.computeBlobs(blobs.pixels);
  // clear the polygon (original functionality)
  poly.reset();
  // create the polygon from the blobs (custom functionality, see class)
  poly.createPolygon();
  drawFlowfield();
}

void setupFlowfield() {
  // set stroke weight (for particle display) to 2.5
  strokeWeight(2.5);
  // initialize all particles in the flow
  for (int i=0; i<flow.length; i++) {
    flow[i] = new Particle(i/10000.0);
  }
  // set all colors randomly now
  setRandomColors(1);
}

void drawFlowfield() {
  // center and reScale from Kinect to custom dimensions
  translate(0, (height-kinectHeight*reScale)/2);
  scale(reScale);
  // set global variables that influence the particle flow's movement
  globalX = noise(frameCount * 0.01) * width/2 + width/4;
  globalY = noise(frameCount * 0.005 + 5) * height;
  // update and display all particles in the flow
  for (Particle p : flow) {
    p.updateAndDisplay();
  }
  // set the colors randomly every 240th frame
  setRandomColors(240);
}

// sets the colors every nth frame
void setRandomColors(int nthFrame) {
  if (frameCount % nthFrame == 0) {
    // turn a palette into a series of strings
    String[] paletteStrings = split(palettes[int(random(palettes.length))], ",");
    // turn strings into colors
    color[] colorPalette = new color[paletteStrings.length];
    for (int i=0; i<paletteStrings.length; i++) {
      colorPalette[i] = int(paletteStrings[i]);
    }
    // set background color to first color from palette
    bgColor = colorPalette[0];
    // set all particle colors randomly to color from palette (excluding first aka background color)
    for (int i=0; i<flow.length; i++) {
      flow[i].col = colorPalette[int(random(1, colorPalette.length))];
    }
  }
}

The particle class

// a basic noise-based moving particle
class Particle {
  // unique id, (previous) position, speed
  float id, x, y, xp, yp, s, d;
  color col; // color

  Particle(float id) {
    this.id = id;
    s = random(2, 6); // speed
  }

  void updateAndDisplay() {
    // let it flow, end with a new x and y position
    id += 0.01;
    d = (noise(id, x/globalY, y/globalY) - 0.5) * globalX;
    x += cos(radians(d)) * s;
    y += sin(radians(d)) * s;

// constrain to boundaries
if (x<-10) x=xp=kinectWidth+10;
if (x>kinectWidth+10) x=xp=-10;
if (y<-10) y=yp=kinectHeight+10;
if (y>kinectHeight+10) y=yp=-10;

// if there is a polygon (more than 0 points)
if (poly.npoints > 0) {
  // if this particle is outside the polygon
  if (!poly.contains(x, y)) {
    // while it is outside the polygon
    while(!poly.contains(x, y)) {
      // randomize x and y
      x = random(kinectWidth);
      y = random(kinectHeight);
    }
    // set previous x and y, to this x and y
    xp=x;
    yp=y;
  }
}

// individual particle color
stroke(col);
// line from previous to current position
line(xp, yp, x, y);

// set previous to current position
xp=x;
yp=y;

  }
}

The PolygonBlob class :

// an extended polygon class with my own customized createPolygon() method (feel free to improve!)
class PolygonBlob extends Polygon {

  // took me some time to make this method fully self-sufficient
  // now it works quite well in creating a correct polygon from a person's blob
  // of course many thanks to v3ga, because the library already does a lot of the work
  void createPolygon() {
    // an arrayList... of arrayLists... of PVectors
    // the arrayLists of PVectors are basically the person's contours (almost but not completely in a polygon-correct order)
    ArrayList<ArrayList> contours = new ArrayList<ArrayList>();
    // helpful variables to keep track of the selected contour and point (start/end point)
    int selectedContour = 0;
    int selectedPoint = 0;

// create contours from blobs
// go over all the detected blobs
for (int n=0 ; n<theBlobDetection.getBlobNb(); n++) {
  Blob b = theBlobDetection.getBlob(n);
  // for each substantial blob...
  if (b != null && b.getEdgeNb() > 100) {
    // create a new contour arrayList of PVectors
    ArrayList<PVector> contour = new ArrayList<PVector>();
    // go over all the edges in the blob
    for (int m=0; m<b.getEdgeNb(); m++) {
      // get the edgeVertices of the edge
      EdgeVertex eA = b.getEdgeVertexA(m);
      EdgeVertex eB = b.getEdgeVertexB(m);
      // if both ain't null...
      if (eA != null && eB != null) {
        // get next and previous edgeVertexA
        EdgeVertex fn = b.getEdgeVertexA((m+1) % b.getEdgeNb());
        EdgeVertex fp = b.getEdgeVertexA((max(0, m-1)));
        // calculate distance between vertexA and next and previous edgeVertexA respectively
        // positions are multiplied by kinect dimensions because the blob library returns normalized values
        float dn = dist(eA.x*kinectWidth, eA.y*kinectHeight, fn.x*kinectWidth, fn.y*kinectHeight);
        float dp = dist(eA.x*kinectWidth, eA.y*kinectHeight, fp.x*kinectWidth, fp.y*kinectHeight);
        // if either distance is bigger than 15
        if (dn > 15 || dp > 15) {
          // if the current contour size is bigger than zero
          if (contour.size() > 0) {
            // add final point
            contour.add(new PVector(eB.x*kinectWidth, eB.y*kinectHeight));
            // add current contour to the arrayList
            contours.add(contour);
            // start a new contour arrayList
            contour = new ArrayList<PVector>();
          // if the current contour size is 0 (aka it's a new list)
          } else {
            // add the point to the list
            contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
          }
        // if both distance are smaller than 15 (aka the points are close)
        } else {
          // add the point to the list
          contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
        }
      }
    }
  }
}

// at this point in the code we have a list of contours (aka an arrayList of arrayLists of PVectors)
// now we need to sort those contours into a correct polygon. To do this we need two things:
// 1. The correct order of contours
// 2. The correct direction of each contour

// as long as there are contours left...
while (contours.size() > 0) {

  // find next contour
  float distance = 999999999;
  // if there are already points in the polygon
  if (npoints > 0) {
    // use the polygon's last point as a starting point
    PVector lastPoint = new PVector(xpoints[npoints-1], ypoints[npoints-1]);
    // go over all contours
    for (int i=0; i<contours.size(); i++) {
      ArrayList<PVector> c = contours.get(i);
      // get the contour's first point
      PVector fp = c.get(0);
      // get the contour's last point
      PVector lp = c.get(c.size()-1);
      // if the distance between the current contour's first point and the polygon's last point is smaller than distance
      if (fp.dist(lastPoint) < distance) {
        // set distance to this distance
        distance = fp.dist(lastPoint);
        // set this as the selected contour
        selectedContour = i;
        // set selectedPoint to 0 (which signals first point)
        selectedPoint = 0;
      }
      // if the distance between the current contour's last point and the polygon's last point is smaller than distance
      if (lp.dist(lastPoint) < distance) {
        // set distance to this distance
        distance = lp.dist(lastPoint);
        // set this as the selected contour
        selectedContour = i;
        // set selectedPoint to 1 (which signals last point)
        selectedPoint = 1;
      }
    }
  // if the polygon is still empty
  } else {
    // use a starting point in the lower-right
    PVector closestPoint = new PVector(width, height);
    // go over all contours
    for (int i=0; i<contours.size(); i++) {
      ArrayList<PVector> c = contours.get(i);
      // get the contour's first point
      PVector fp = c.get(0);
      // get the contour's last point
      PVector lp = c.get(c.size()-1);
      // if the first point is in the lowest 5 pixels of the (kinect) screen and more to the left than the current closestPoint
      if (fp.y > kinectHeight-5 && fp.x < closestPoint.x) {
        // set closestPoint to first point
        closestPoint = fp;
        // set this as the selected contour
        selectedContour = i;
        // set selectedPoint to 0 (which signals first point)
        selectedPoint = 0;
      }
      // if the last point is in the lowest 5 pixels of the (kinect) screen and more to the left than the current closestPoint
      if (lp.y > kinectHeight-5 && lp.x < closestPoint.x) {
        // set closestPoint to last point
        closestPoint = lp;
        // set this as the selected contour
        selectedContour = i;
        // set selectedPoint to 1 (which signals last point)
        selectedPoint = 1;
      }
    }
  }

  // add contour to polygon
  ArrayList<PVector> contour = contours.get(selectedContour);
  // if selectedPoint is bigger than zero (aka last point) then reverse the arrayList of points
  if (selectedPoint > 0) { java.util.Collections.reverse(contour); }
  // add all the points in the contour to the polygon
  for (PVector p : contour) {
    addPoint(int(p.x), int(p.y));
  }
  // remove this contour from the list of contours
  contours.remove(selectedContour);
  // the while loop above makes all of this code loop until the number of contours is zero
  // at that time all the points in all the contours have been added to the polygon... in the correct order (hopefully)
}

  }
}

My combined attempt:

// import libraries
import processing.opengl.*; // opengl
import SimpleOpenNI.*;      // kinect
import blobDetection.*;     // blobs
import SimpleOpenNI.*;
import java.util.*;

// declare SimpleOpenNI object
SimpleOpenNI context;

// background color
color bgColor;
// three color palettes (artifact from me storing many interesting color palettes as strings in an external data file ;-)
String[] palettes = {
  "-1117720,-13683658,-8410437,-9998215,-1849945,-5517090,-4250587,-14178341,-5804972,-3498634",
  "-67879,-9633503,-8858441,-144382,-4996094,-16604779,-588031",
  "-16711663,-13888933,-9029017,-5213092,-1787063,-11375744,-2167516,-15713402,-5389468,-2064585"
};

int blob_array[];
int userCurID;
int cont_length = 640*480;
String[] sampletext = { "lu", "ma", "me", "ve", "sa", "di", "week", "end", "b", "c", "le" }; // sample random text

void setup() {
  // it's possible to customize this, for example 1920x1080
  size(1280, 720, OPENGL);
  // initialize SimpleOpenNI object
  context = new SimpleOpenNI(this);
  if (!context.enableDepth() || !context.enableUser()) {
    // if context.enableScene() returns false
    // then the Kinect is not working correctly
    // make sure the green light is blinking
    println("Kinect not connected!");
    exit();
  } else {
    // mirror the image to be more intuitive
    context.setMirror(true);
    // calculate the reScale value
    // currently it's rescaled to fill the complete width (cuts of top-bottom)
    // it's also possible to fill the complete height (leaves empty sides)
    reScale = (float) width / kinectWidth;
    // create a smaller blob image for speed and efficiency
  }
}

void draw() {
  // fading background
  noStroke();
  fill(bgColor, 65);
  rect(0, 0, width, height);

  background(-1);
  context.update();

  int[] depthValues = context.depthMap();
  int[] userMap = null;
  int userCount = context.getNumberOfUsers();
  if (userCount > 0) {
    userMap = context.userMap();
  }
}

// sets the colors every nth frame
void setRandomColors(int nthFrame) {

// turn strings into colors
color[] colorPalette = new color[paletteStrings.length];
for (int i=0; i<paletteStrings.length; i++) {
  colorPalette[i] = int(paletteStrings[i]);
}
// set background color to first color from palette
bgColor = colorPalette[0];

}

loadPixels();

for (int y=0; y<context.depthHeight(); y+=35) { for (int x=0; x<context.depthWidth(); x+=35) { int index = x + y * context.depthWidth(); if (userMap != null && userMap[index] > 0) {
userCurID = userMap[index];
blob_array[index] = 255; fill(150,200,30); text(sampletext[int(random(0,10))],x,y); // put your sample random text

      }
      else {
                blob_array[index]=0;


      }
    }
  }

}
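For the actual goal (a background colour that changes as a function of time), the whole second sketch does not need to be merged in: bgColor can be driven from millis() while keeping the first sketch's draw() as it is. A minimal sketch of the idea, assuming the palettes strings from the second code are kept as a global; call setupPalette() once in setup() and updateBackground() at the top of draw() instead of background(255, 260, 150):

color[] colorPalette;
color bgColor;

void setupPalette() {
  // pick one palette and turn its comma-separated entries into colors (same trick as setRandomColors)
  String[] paletteStrings = split(palettes[int(random(palettes.length))], ",");
  colorPalette = new color[paletteStrings.length];
  for (int i = 0; i < paletteStrings.length; i++) {
    colorPalette[i] = int(paletteStrings[i]);
  }
}

void updateBackground() {
  float cycle = 10000.0;                               // milliseconds for one full fade
  float t = (millis() % cycle) / cycle;                // 0..1 position inside the current fade
  int idx  = int(millis() / cycle) % colorPalette.length;
  int next = (idx + 1) % colorPalette.length;
  bgColor = lerpColor(colorPalette[idx], colorPalette[next], t);   // blend between two palette entries
  background(bgColor);
}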

How, and with which Processing libraries, can we make a background that changes color over time?


Hello all, I have a question: how, and with which Processing libraries, can we make a background that changes color over time with the Kinect? I'm working with a group on a project to explain augmented reality; being amateurs, we chose to work with the Kinect and Processing to build a small program. If you can help us, thank you in advance.

The code :

import SimpleOpenNI.*;

import java.util.*;

SimpleOpenNI context;

int blob_array[];

int userCurID;

int cont_length = 640*480;

String[] sampletext = { "lu", "ma" , "me", "ve", "sa", "di", "week", "end" , "b", "c", "le"

}; // sample random text

void setup(){

size(640, 480);

context = new SimpleOpenNI(this);

context.setMirror(true);

context.enableDepth();

context.enableUser();

blob_array=new int[cont_length];

}

void draw() {

background(-1);

 context.update();

int[] depthValues = context.depthMap();

int[] userMap =null;

int userCount = context.getNumberOfUsers();

if (userCount > 0) {

userMap = context.userMap();

}

loadPixels();

background(255,260,150);

for (int y=0; y<context.depthHeight(); y+=35) {

for (int x=0; x<context.depthWidth(); x+=35) {

int index = x + y * context.depthWidth();

  if (userMap != null && userMap[index] > 0) {

     userCurID = userMap[index];

        blob_array[index] = 255;

        fill(150,200,30);

    text(sampletext[int(random(0,10))],x,y); // put your sample random text



      }

      else {

                blob_array[index]=0;



      }

    }

  }

}


How to put a background that changes with time with the Kinect?


Hello, here is my code. I would like the background to evolve over time, varying in color. Could you help us? We are amateurs and are just beginning with Processing.

Thank you in advance.

import SimpleOpenNI.*;
import java.util.*;

SimpleOpenNI context;

int blob_array[];
int userCurID;
int cont_length = 640*480;
String[] sampletext = { "lu", "ma", "me", "ve", "sa", "di", "week", "end", "b", "c", "le" }; // sample random text

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.setMirror(true);
  context.enableDepth();
  context.enableUser();

  blob_array = new int[cont_length];
}

void draw() {
  background(-1);
  context.update();
  int[] depthValues = context.depthMap();
  int[] userMap = null;
  int userCount = context.getNumberOfUsers();
  if (userCount > 0) {
    userMap = context.userMap();
  }

  loadPixels();
  background(255, 260, 150);
  for (int y=0; y<context.depthHeight(); y+=35) {
    for (int x=0; x<context.depthWidth(); x+=35) {
      int index = x + y * context.depthWidth();
      if (userMap != null && userMap[index] > 0) {
        userCurID = userMap[index];
        blob_array[index] = 255;
        fill(150, 200, 30);
        text(sampletext[int(random(0, 10))], x, y); // put your sample random text

      }
      else {
                blob_array[index]=0;


      }
    }
  }

}
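One simple way to make the background drift through colours over time is to switch to HSB colour mode and derive the hue from millis(). A minimal sketch of the idea: add colorMode(HSB, 360, 100, 100) in setup() and replace the background(255, 260, 150) call in draw() with something like the lines below (note that the fill() calls are then also interpreted in HSB):

// hue cycles through the full colour wheel roughly every 18 seconds
float hue = (millis() / 50.0) % 360;
background(hue, 60, 95);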

Detecting motion in a video stream and finding pixel locations...


I wrote a sketch to find motion outlines using OpenCV. I then use the getPoints() function to get the PVectors to use for placing particles. It works quite well, but I'm worried about performance when I try to upscale this to an HD video stream. I'm fairly new to Processing, so can anyone tell me if there is a more efficient way of doing this? TIA. My code (not including the Particle class, which is just a simple class):

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;
ArrayList<PVector> contourPoints = new ArrayList<PVector>();
ArrayList<Particle> particles;
PVector loc = new PVector(0, 0);

void setup() {
  size(640, 480, P2D);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  particles = new ArrayList<Particle>();

  opencv.startBackgroundSubtraction(5, 3, 0.5);

  video.start();
}

void draw() {
  background(255);
  opencv.loadImage(video);

  opencv.updateBackground();

  opencv.dilate();
  opencv.erode();

  for (Contour contour : opencv.findContours()) {
    contourPoints = contour.getPoints();
    for (int i = 0; i < contourPoints.size()-10; i = i+10) {
      loc = contourPoints.get(i);
      particles.add(new Particle(new PVector(loc.x, loc.y)));
    }
  }

  // Looping through backwards to delete
  for (int i = particles.size()-1; i >= 0; i--) {
    Particle p = particles.get(i);
    p.run();
    if (p.isDead()) {
      particles.remove(i);
    }
  }
  // println(frameRate);
}

void captureEvent(Capture c) {
  c.read();
}
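If the per-point particle spawning gets too heavy at HD resolutions, two things usually help: run the contour detection on a downscaled copy of the frame, and spawn particles from the contour's polygon approximation instead of stepping through every raw point. A rough sketch of the second idea; getPolygonApproximation() and setPolygonApproximationFactor() are OpenCV for Processing Contour methods as far as I recall, so treat them as an assumption to verify:

for (Contour contour : opencv.findContours()) {
  contour.setPolygonApproximationFactor(3);   // higher factor = fewer, coarser vertices
  for (PVector p : contour.getPolygonApproximation().getPoints()) {
    particles.add(new Particle(new PVector(p.x, p.y)));
  }
}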

Chroma keying with kinect v2


I want to do chroma keying. I can do chroma keying with the Kinect v1, but I can't do it with the Kinect v2.

I use KinectPV2 version 0.7.5. This library is great; unfortunately the CoordinateMapperRGBDepth example is broken ("example broken, check 0.7.2 version"). I hope KinectPV2 gets updated.

Please tell me another way.

Windows 8.1, Processing 3.

Implementing Arduino into Skeleton tracking with Kinect4Win Library


Hi, I am new to Processing. I have done some simple 3D sketches and now I'm trying to use the skeleton tracking from the Kinect4Win library to talk to an Arduino. I'm going to use serial to talk to the Arduino via USB. I already have the skeleton tracking image working.

This is the equipment I have:
Kinect 1473 (the one that came with my Xbox 360)
Windows 10 PC running Processing
Kinect-to-USB adapter
Arduino Uno

Has anyone done something like this with this library before? How do I turn the skeleton data into something Processing can act on and send as serial output? For example: an LED lights up when I extend my arm to the right.

Thanks, Vivek
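The Processing side only needs the joint positions as PVectors plus the Serial library; whatever the Kinect4Win specifics, the pattern is to compare the hand joint to the shoulder joint each frame and write a byte over serial when the arm is extended. A minimal sketch of that pattern (the joint variables are placeholders for whatever your skeleton code already gives you, and the serial port index is an assumption):

import processing.serial.*;

Serial arduino;
boolean armWasExtended = false;

void setup() {
  size(640, 480);
  // pick the right port from Serial.list(); index 0 is just an assumption
  arduino = new Serial(this, Serial.list()[0], 9600);
}

// call this once per frame with the joints your skeleton library reports
void checkArm(PVector rightHand, PVector rightShoulder) {
  // "extended to the right" = hand well to the right of the shoulder, roughly at shoulder height
  boolean extended = (rightHand.x - rightShoulder.x) > 150
                  && abs(rightHand.y - rightShoulder.y) < 100;
  if (extended != armWasExtended) {
    arduino.write(extended ? '1' : '0');   // the Arduino sketch turns the LED on for '1', off for '0'
    armWasExtended = extended;
  }
}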

openni 1280x1024 RGB resolution.


Is there a way to obtain the high-resolution RGB image with Processing? I am using Processing 2 with the SimpleOpenNI library in combination with OpenCV.

Theoretically it should be possible to get the 1280x1024 RGB image at a lower refresh rate of 10 fps. This is my setup function:

void setup() {
  //fs = new FullScreen(this);
  frame.setBackground(new java.awt.Color(0, 0, 0));
  background(0);
  frameRate(30);
  size(displayWidth, displayHeight);
  //video = new Capture(this, 640, 480);

  SimpleOpenNI.start();
  // print all the cams
  StrVector strList = new StrVector();
  SimpleOpenNI.deviceNames(strList);
  for (int i=0; i<strList.size (); i++)
    println(i + ":" + strList.get(i));

  RGBNI = new SimpleOpenNI(0, this, SimpleOpenNI.RUN_MODE_MULTI_THREADED);
  RGBNI.enableRGB(640, 480, 10);

  depthNI = new SimpleOpenNI(0, this, SimpleOpenNI.RUN_MODE_MULTI_THREADED);
  depthNI.setMirror(true);
  depthNI.enableDepth();
  depthNI.enableHand();
  depthNI.startGesture(SimpleOpenNI.GESTURE_WAVE);

  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  //video.start();
  //img = createImage(RGBNI.rgbImage().width, RGBNI.rgbImage().height, RGB);
  lastTimeCheck = millis();
  captureTimer = new Timer(3000);
  refreshTimer = new Timer(15000);
  refreshTimer.start();
  noCursor();

  // fs.enter();
}
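For what it's worth, SimpleOpenNI's enableRGB() takes a resolution and frame rate, so the SXGA colour mode can in principle be requested directly; whether the driver actually honours 1280x1024 (and whether enableRGB() reports failure the same way enableDepth() does) depends on the Kinect model and the OpenNI build, so treat the values below as an assumption to test rather than a confirmed recipe:

RGBNI = new SimpleOpenNI(0, this, SimpleOpenNI.RUN_MODE_MULTI_THREADED);
// the Kinect only delivers SXGA colour at a reduced frame rate (roughly 10-15 fps)
if (!RGBNI.enableRGB(1280, 1024, 15)) {
  println("1280x1024 RGB mode not available, falling back to VGA");
  RGBNI.enableRGB(640, 480, 30);
}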

Kinect Head Tracking multiple users


Hello,

I am using the SimpleOpenNI library to track users in the room and let a pair of eyes follow the users' movement.

Here is my code so far:

import SimpleOpenNI.*;

SimpleOpenNI context;

int Pupilh = 200;
int Pupilw = 150;
int Eyeh   = 500;
int Eyew   = 350;

int EyeDistance = 400;

int Sizew  = 1250;
int Sizeh  = 800;
int fps    = 30;

int ColBlack = 0;
int ColWhite = 255;

int DefaultX = 0;
int DefaultY = 0;
boolean lBodyAppeared = false;
int user = 0;


void setup() {
  size( Sizew, Sizeh );
  background( ColBlack );

  context = new SimpleOpenNI(this);

  if(!context.enableDepth( Sizew, Sizeh, fps )) {
     println("The camera is not connected.");
     exit();
     return;
  }

  context.enableUser();

  DefaultX = int( ( width / 2 ) - 175 );
  DefaultY = int( ( height / 2 ) );

  context.setMirror( false );

  frameRate( fps );
  smooth();
}


void DrawGround() {
  fill( ColWhite );
  ellipse( DefaultX, DefaultY, Eyew, Eyeh );
  ellipse( DefaultX + EyeDistance, DefaultY, Eyew, Eyeh );

  noFill();

  fill( ColBlack );
  ellipse( DefaultX, DefaultY, Pupilw, Pupilh );
  ellipse( DefaultX + EyeDistance, DefaultY, Pupilw, Pupilh );
}


void draw() {
  fill( ColWhite );
  ellipse( DefaultX, DefaultY, Eyew, Eyeh );
  ellipse( DefaultX + EyeDistance, DefaultY, Eyew, Eyeh );

  context.update();
  context.userImage();
  PVector jointPos = new PVector();
  PVector projectivePos = new PVector();
  float  confidence;

  int[] userList = context.getUsers();

  user = 0;
  if( context.getNumberOfUsers() - 1 >= 0 )
    user = userList[context.getNumberOfUsers() - 1];
  else
    user = 0;

  if( user != 0 && context.isTrackingSkeleton( user )) {
    confidence = context.getJointPositionSkeleton( user, SimpleOpenNI.SKEL_HEAD, jointPos);

    if( confidence > 0.5 ) {
      context.convertRealWorldToProjective(jointPos, projectivePos);

      fill( ColWhite );

      ellipse( DefaultX, DefaultY, Eyew, Eyeh );
      ellipse( DefaultX + EyeDistance, DefaultY, Eyew, Eyeh );

      fill( ColBlack );

      float fMoveFactorx = min( 1., abs( projectivePos.x/context.depthWidth() ));
      float fMoveFactory = min( 1., abs( projectivePos.y/context.depthHeight() ));

      ellipse( (DefaultX - (Eyew / 2)) + (320 * fMoveFactorx), (DefaultY - 150) + (Eyeh * fMoveFactory), Pupilw, Pupilh );
      ellipse( (DefaultX + EyeDistance - (Eyew / 2)) + (320 * fMoveFactorx), (DefaultY - 150) + (Eyeh * fMoveFactory), Pupilw, Pupilh );
    }
  }
  else {
    fill( ColBlack );
    ellipse( DefaultX, DefaultY, Pupilw, Pupilh );
    ellipse( DefaultX + EyeDistance, DefaultY, Pupilw, Pupilh );
  }
}

void onNewUser(SimpleOpenNI curContext, int userId)
{
  curContext.startTrackingSkeleton(userId);
}

With this code I would like to track people as they walk in front of the Kinect. My problem is that the tracking does not work every time. Only one user should be tracked, and when that user is no longer tracked another user should be tracked instead, but it doesn't work right. If you move slowly and only one person is in front of the Kinect, everything works fine.

Can you please help me improve the tracking? It's not that much code and I need a solution by tomorrow. The sketch runs on Mac OS X with a model 1517 Kinect V1, and will be shown at an exhibition at the university. I use Processing version 2.2.1.

Thank you in advance for helping me.
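One way to make the switch-over more robust is to keep a single global trackedUser id, claim it in onNewUser() only when nobody is being tracked yet, and release it in onLostUser() so the next visible user can take over. A minimal sketch of that bookkeeping, assuming the SimpleOpenNI callbacks used above:

int trackedUser = 0;   // 0 = nobody tracked

void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
  if (trackedUser == 0) {
    trackedUser = userId;          // adopt the first user that appears
  }
}

void onLostUser(SimpleOpenNI curContext, int userId) {
  if (userId == trackedUser) {
    trackedUser = 0;               // free the slot...
    int[] users = curContext.getUsers();
    for (int u : users) {
      if (u != userId && curContext.isTrackingSkeleton(u)) {
        trackedUser = u;           // ...and hand it to another tracked user, if any
        break;
      }
    }
  }
}

// in draw(), replace the userList-based selection with:
//   if (trackedUser != 0 && context.isTrackingSkeleton(trackedUser)) { ... }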

Processing libraries for Kinect One (Xbox One Kinect + PC adapter)?


Hello!

I've seen libraries here and there that support v1 and v2 of the Kinect, but those are hard to purchase lately (the v2 was discontinued by Microsoft). So I was wondering if there are any functional libraries that work with the Kinect One (Xbox One Kinect + PC adapter) and Processing?

Thanks for info!


Draw a different color if the hand is in a different position - Kinect


Hi everyone

I don't understand a part of my code. I want to make an if statement so that when the hand is in a different part of the screen, the color of the circle that follows the hand changes. Like this: http://simple-openni.googlecode.com/svn/site/screenshots/NiteSlider2d.jpg

But I really don't know which function I need to call upon. This is my code now:

import SimpleOpenNI.*;
//import processing.opengl.*;

SimpleOpenNI  kinect;                          // "kinect" refers to the SimpleOpenNI instance throughout this sketch

color[]       userClr = new color[] {          // one color per tracked user
  color(0, 255, 0),
  color(255, 0, 0),
  color(0, 0, 255),
  color(255, 255, 0),
  color(255, 0, 255),
  color(0, 255, 255)
};

PVector com = new PVector();
PVector com2d = new PVector();


void setup()
{

  size(1920, 1080, OPENGL);                             // the Kinect has a max output of 640x480, but the image is enlarged with the scale() call in draw()
  kinect = new SimpleOpenNI(this);
  colorMode(HSB);
  if (kinect.isInit() == false)
  {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }

  kinect.enableDepth();                       // generate a depth map
  kinect.enableUser();                        // generate a skeleton with all joints
}

void draw()
{
  scale(3,2.25);                                    // enlarge the image to the window size
  kinect.update();                            // update the Kinect camera every frame
  image(kinect.userImage(), 0, 0);            // show the image
  strokeWeight(7);                             // skeleton line thickness



  // -----------------------------------------------------------------
  // trigger: draw the skeleton when a person is in view

  int[] userList = kinect.getUsers();
  for (int i=0; i<userList.length; i++)
  {
    if (kinect.isTrackingSkeleton(userList[0]))
    {
      stroke(userClr[ (userList[0] - 1) % userClr.length ] );
      drawSkeleton(userList[0]);
    }
  }

  showFramerate();                             // show the framerate

  // -----------------------------------------------------------------
  // here you attach objects to the skeleton

  int[] users=kinect.getUsers();

  for (int i=0; i < users.length; i++) {
    int userId=users[0];

    PVector realData=new PVector();
    PVector projData=new PVector();

    stroke(255);
    strokeWeight(2);
    fill(255, 255, 255);

//    // attach object to head
//    kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, realData);
//    kinect.convertRealWorldToProjective(realData, projData);
//    ellipse(projData.x, projData.y, 30, 30);

    // attach object to right hand
    kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, realData);
    kinect.convertRealWorldToProjective(realData, projData);
    println("realData right hand", realData);

    //SO THIS PART IS WRONG AND I DON'T KNOW HOW TO MAKE THIS IF STATEMENT
    if(realData() == projData.x(width) >100 && projData.x(width) <200){
     ellipse(projData.x, projData.y, 30, 30);
    fill(10,200,100);
    }
    ellipse(projData.x, projData.y, 30, 30);
    fill(150,10,60);




    // attach object to left hand
    kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, realData);
    // convert the left-hand skeleton data into realData and projData
    kinect.convertRealWorldToProjective(realData, projData);


    // realData holds three values, e.g. -183.6774, -145.80629, 546.2606: the x, y and z coordinates
    println("realData left hand:", realData);

    ellipse(projData.x, projData.y, 30, 30);
    fill(10,150,60);
  }




//fill(0);                            // fake widescreen
//rect(0, 0, 640, 80);
//rect(0, 400, 1280, 80);
}

// -----------------------------------------------------------------
// the skeleton is built up from these drawLimb calls

void drawSkeleton(int userId)
{
//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
//
//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);

//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);

//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);

//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);

//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
//  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
}

// -----------------------------------------------------------------
// SimpleOpenNI events

void onNewUser(SimpleOpenNI curkinect, int userId)
{
  println("onNewUser - userId: " + userId);
  println("\tstart tracking skeleton");

  curkinect.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curkinect, int userId)
{
  println("onLostUser - userId: " + userId);
}

void onVisibleUser(SimpleOpenNI curkinect, int userId)
{
  println("onVisibleUser - userId: " + userId);
}

// -----------------------------------------------------------------
// simple function that shows the framerate, to measure performance

void showFramerate()
{
  fill(255);
  textSize(20);
  text("fps" + frameRate, 10, 20);
}
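The if statement only needs the projective (screen-space) coordinates that are already computed; realData is a PVector, not a function, so it is not called with (). Also note that in Processing fill() must be set before the ellipse it is meant to colour. A minimal sketch of the idea for the right hand, using the projData from the block above (the 100-200 pixel band is just an example region, and the sketch's colorMode(HSB) applies to these fill values too):

// right hand: pick a colour depending on where the hand is on screen
kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, realData);
kinect.convertRealWorldToProjective(realData, projData);

if (projData.x > 100 && projData.x < 200) {
  fill(10, 200, 100);        // colour for the 100-200 px band
} else if (projData.x >= 200 && projData.x < 300) {
  fill(60, 200, 255);        // colour for the 200-300 px band
} else {
  fill(150, 10, 60);         // default colour everywhere else
}
ellipse(projData.x, projData.y, 30, 30);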

help with a project!


Hi all

bit of a novice here

I’m looking for a little guidance on a project I’m undertaking

the end goal is this

within an HTML5 browser hosted online, you move and pan a camera around a 3D space,

the 3D space is a platform floating in the air, with 10 animated figures speaking on it; as you move between them the audio balance changes, allowing you to focus more on one or the other depending on where you stand

the figures will be made by pointcloud recording of real people.

so far my thinking is this

use a kinect 2 to record the pointcloud while recording the audio (it doesn't matter if it's just the front and not full 360, I quite like that glitchy effect), doing each speaker individually.

record it into brekel pro pointcloud 2, which lets you export it as an animated mesh (?)

now is where I’m stuck

should I maybe use Blender or some other 3D software to build the platform and put them all together into one single animation, then use Processing to do the camera programming?

is it easy enough to put it all together in Processing?

maybe there's a better software package that already does it?

I'm not really confident in Processing but could learn it as I go if it's not too complicated

any help gratefully received

Applying a mask flips or rotates the image!


Hi. I'm developing in Eclipse. With a kinect (depth sensor).

Currently I'm trying to mask images, but for some reason as soon as I apply a filter, the image flips (or rotates 180º), and I just can't get it back to the correct orientation.

This happens not only with the mask() method, but also with other filters (the filter() method itself, or blend()).

I'm giving you an example code:

PGraphics mappingZone;
.. mappingZone = applet.createGraphics(kinect.width, kinect.height, PApplet.P2D);
.. filteredImg = applet.createImage(kinect.width, kinect.height, PApplet.P2D); ...

  mappingZone.beginDraw();
  mappingZone.background(0);
  mappingZone.image(mappingTexture, 0, 0);
  mappingZone.mask(filteredImg);
  mappingZone.endDraw();

  applet.image(mappingZone, 0, 0, applet.width, applet.height);

I tried several things to flip the image to the correct position, like: mappingZone.scale(-1, -1);

But nothing works. However, applying a second filter usually returns the image to the correct position. Any clue?
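I can't tell from the snippet exactly what triggers it, but this is a known quirk of the OpenGL-based renderers: operations that go through a texture (mask(), filter(), blend()) can come back vertically flipped. One common workaround is to compensate when drawing the PGraphics to the screen by flipping the y axis yourself. A minimal sketch of that workaround, assuming the flip is purely vertical (if the image really is rotated 180º, mirror both axes instead: translate(width, height) and scale(-1, -1)):

// draw mappingZone flipped back along the y axis
applet.pushMatrix();
applet.translate(0, applet.height);   // move the origin to the bottom edge
applet.scale(1, -1);                  // mirror vertically
applet.image(mappingZone, 0, 0, applet.width, applet.height);
applet.popMatrix();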


Point Cloud for Kinect V2 (add shader to invert color)


Hi everyone. Firstly, I'm new to point cloud processing, and I've been working with the Kinect v2 Processing library by Thomas Sanchez Lengeling, from

http://codigogenerativo.com/code/kinectpv2-k4w2-processing-library/

The code is already working; it's just that, since I'm new to Processing, I don't really know how to modify the shader to invert the colors. I want the background to be white while the point cloud is black. Can anyone help me with this?

Here's the code:

https://github.com/ThomasLengeling/KinectPV2/blob/master/KinectPV2/examples/PointCloudOGL/PointCloudOGL.pde
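Without touching the GLSL shader at all, one simple way to get black points on a white background is to let the example draw as usual and then invert the finished frame at the end of draw() with Processing's built-in filter(). A minimal sketch of the idea (bear in mind it inverts everything that has been drawn so far, including any text overlay):

void draw() {
  background(0);

  // ... the example's existing point-cloud drawing code stays unchanged ...

  // invert the finished frame: the black background becomes white, white points become black
  filter(INVERT);
}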

Processing Threads, Synchronization of Variables across threads!


Hi all, hope someone can shed some light on this. This is my first time working with threads in Processing; I'm using a simple call to thread("somemethod"). Within somemethod() I alter a globally declared boolean, but the change is not being recognised by other functions in the main draw() loop?! Simplified code below :)

simplified code shown in the post below.

Any suggestions would be greatly appreciated!

In a nutshell how do you get a variable (boolean) state change in one thread to affect the state of the same boolean in another thread? :)
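In short: mark the shared flag as volatile (or guard it with a synchronized block) so that a write made by the background thread is guaranteed to be visible to the animation thread that runs draw(). A minimal sketch of the pattern with Processing's thread("...") call; loadData and dataReady are placeholder names:

volatile boolean dataReady = false;   // volatile: a write in one thread is visible in the others

void setup() {
  size(400, 200);
  thread("loadData");                 // runs loadData() on a background thread
}

void loadData() {
  delay(2000);                        // stand-in for slow work (file, network, etc.)
  dataReady = true;                   // this change is now guaranteed to be seen by draw()
}

void draw() {
  background(dataReady ? 0 : 255);
  fill(dataReady ? 255 : 0);
  text(dataReady ? "ready" : "working...", 20, height/2);
}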
